itramblings

Ramblings from an IT manager and long time developer.


FreeNAS shell commands for restarting the AD connection

Here is a quick set of shell commands to reset and re-enable Active Directory (AD) on FreeNAS 9.x:

# Stop the directory-service related services
service ix-kerberos stop
service ix-nsswitch stop
service ix-kinit stop
service ix-activedirectory stop
service ix-pam stop
service ix-cache stop

# Re-enable Active Directory in the FreeNAS configuration database
sqlite3 /data/freenas-v1.db "update directoryservice_activedirectory set ad_enable=1;"
echo $?    # 0 means the update succeeded
# Bring the services back up and obtain a Kerberos ticket
service ix-kerberos start
service ix-nsswitch start
service ix-kinit start
service ix-kinit status
echo $?
klist              # list Kerberos tickets to confirm one was obtained

# Restart Samba/CIFS through the FreeNAS middleware and start the AD service
python /usr/local/www/freenasUI/middleware/notifier.py start cifs
service ix-activedirectory start
service ix-activedirectory status
echo $?
python /usr/local/www/freenasUI/middleware/notifier.py restart cifs
service ix-pam start
service ix-cache start &
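
Once the services are back up, a quick way to confirm the AD connection is healthy again is to query winbind and the name service switch (a minimal check, assuming Samba/winbind is handling the AD join, as it does on a stock FreeNAS 9.x install):

wbinfo -t          # verify the machine trust account against the domain controller
wbinfo -u          # list domain users via winbind
getent passwd      # domain accounts should appear alongside the local ones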

 


From 32 to 2 ports: Ideal SATA/SAS Controllers for ZFS & Linux MD RAID

An absolutely fantastic post on controllers for home storage systems can be found here; it is reproduced below.

I need a lot of reliable and cheap storage space (media collection, backups). Hardware RAID tends to be expensive and clunky, and I see quite a few advantages in software RAID, whether ZFS on Solaris/FreeBSD or Linux MD RAID:

  • Performance. In many cases they are as fast as hardware RAID, and sometimes faster, because the OS is aware of the RAID layout and can optimize I/O patterns for it. Even the most compute-intensive RAID5 or RAID6 parity calculations take negligible CPU time on a modern processor. For a concrete example, Linux 2.6.32 on a Phenom II X4 945 3.0GHz computes RAID6 parity at close to 8 GB/s on a single core (check dmesg: “raid6: using algorithm sse2x4 (7976 MB/s)”), so sustaining 500 MB/s on a Linux MD raid6 array costs about 6% of one core, or roughly 1.5% of the total CPU time of this quad-core machine. As for the optimized I/O patterns, here is an interesting anecdote: one of the steps YouTube took in its early days to scale its infrastructure was to switch from hardware RAID to software RAID on their database server, and they saw a 20-30% increase in I/O throughput. Watch Seattle Conference on Scalability: YouTube Scalability @ 34’50”.
  • Scalability. ZFS and Linux MD RAID allow building arrays across multiple disk controllers, or multiple SAN devices, alleviating throughput bottlenecks that can arise on PCIe links or GbE links, whereas hardware RAID is restricted to a single controller, with no room for expansion.
  • Reliability. No hardware RAID = one less hardware component that can fail.
  • Ease of recoverability. The data can be recovered by putting the disks in any server. There is no reliance on a particular model of RAID controller.
  • Flexibility. It is possible to create arrays on any disk on any type of controller in the system, or to move disks from one controller to another.
  • Ease of administration. There is only one software interface to learn: zpool(1M) or mdadm(8). No need to install proprietary vendor tools, or to reboot into BIOSes to manage arrays (a minimal example follows this list).
  • Cost. Obviously cheaper since there is no hardware RAID controller to buy.
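
To illustrate the “one software interface” point, here is a minimal sketch of creating and checking a double-parity array with each tool (the disk names, pool name and array name below are hypothetical, so adjust them to your system):

# ZFS (Solaris/FreeBSD): create a raidz2 pool from four disks and check its health
zpool create tank raidz2 da0 da1 da2 da3
zpool status tank

# Linux MD RAID: create the equivalent RAID6 array and check its health
mdadm --create /dev/md0 --level=6 --raid-devices=4 /dev/sda /dev/sdb /dev/sdc /dev/sdd
cat /proc/mdstat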

Consequently, many ZFS and Linux MD RAID users, such as me, look for non-RAID controllers that are simply reliable, fast, and cheap, with no bells and whistles. Most motherboards have up to 4 or 6 onboard ports (be sure to always enable AHCI mode in the BIOS, as it is the best-designed hardware interface a chip can present to the OS for maximum performance), but beyond 4 or 6 disks there are surprisingly few choices of controllers. Over the years I have spent quite some time on the controller manufacturers’ websites, the LKML, linux-ide and ZFS mailing lists, and have established a list of SATA/SAS controllers that are ideal for ZFS or Linux MD RAID. I also included links to online retailers because some of these controllers are not that easy to find online.

The reason the list contains SAS controllers is that they are just as good an option as SATA controllers: many of them are as inexpensive as SATA controllers (even though they target the enterprise market), they are fully compatible with SATA 3Gbps and 6Gbps disks, and they support all the usual features: hotplug, queueing, etc. A SAS controller typically presents SFF-8087 connectors, also known as internal mini-SAS or iPASS connectors. Up to 4 SATA drives can be connected to such a connector with an SFF-8087 to 4xSATA forward breakout cable (as opposed to reverse breakout). This type of cable usually sells for $15-30. Here are a few links if you have trouble finding them.

There are really only 4 significant manufacturers of discrete non-RAID SATA/SAS controller chips on the market: LSI, Marvell, JMicron, and Silicon Image. Controller cards from Adaptec, Areca, HighPoint, Intel, Supermicro, Tyan, etc, most often use chips from one of these 4 manufacturers.

Here is my list of non-RAID SATA/SAS controllers, from 32-port down to 2-port controllers, with the kernel driver used to support them under Linux and Solaris. There is also limited information on FreeBSD support. I focused on native PCIe controllers only, with very few PCI-X ones (actually only one, the very popular 88SX6081). The MB/s/port number in square brackets indicates the maximum practical throughput that can be expected from each SATA port, assuming concurrent I/O on all ports, given the bottleneck of the host link or bus (PCIe or PCI-X). I assumed that only 60-70% of the maximum theoretical PCIe throughput, and only 80% of the maximum theoretical PCI-X throughput, can actually be achieved. These assumptions concur with what I have seen in real-world benchmarks, assuming a Max_Payload_Size setting of either 128 or 256 bytes for PCIe (a common default value) and a more or less default PCI latency timer setting for PCI-X. As of May 2010, modern disks can easily reach 120-130MB/s of sequential throughput at the beginning of the platter, so avoid controllers with a throughput of less than 150MB/s/port if you want to eliminate any possibility of a bottleneck.
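
As an aside, if you want to check which of these kernel drivers is actually bound to a controller on a Linux box, lspci will tell you (a quick sketch; lspci ships with the standard pciutils package):

lspci -nnk | grep -iE -A3 'sata|sas'   # the "Kernel driver in use:" line names the driver (ahci, mvsas, mptsas, sata_mv, ...)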

32 ports

  • [SAS] 4 x switched Marvell 88SE9485, 6Gbps, PCIe (gen2) x16 [150-175MB/s/port]
    [Update 2011-09-29: Availability: $850 $850. This is a HighPoint HBA combining 4 x 8-port Marvell 88SE9485 with PCIe switching technology: RocketRAID 2782.]

    • Linux/Solaris/FreeBSD support: see Marvell 88SE9485 or 88SE9480 below

24 ports

  • [SAS] 3 x switched Marvell 88SE9485, 6Gbps, PCIe (gen2) x16 [200-233MB/s/port]
    [Update 2011-09-29: Availability: $540 $620. This is a HighPoint HBA combining 3 x 8-port Marvell 88SE9485 with PCIe switching technology: RocketRAID 2760A.]

    • Linux/Solaris/FreeBSD support: see Marvell 88SE9485 or 88SE9480 below

16 ports

  • [SAS] LSI SAS2116, 6Gbps, PCIe (gen2) x8 [150-175MB/s/port]
    Availability: $400 $510. LSI HBA based on this chip: LSISAS9200-16e, LSISAS9201-16i. [Update 2010-10-27: only the model with external ports used to be available but now the one with internal ports is available and less expensive.]

  • [SAS] 2 x switched Marvell 88SE9485, 6Gbps, PCIe (gen2) x16 [300-350MB/s/port]
    [Update 2011-09-29: Availability: $450 $480. This is a HighPoint HBA combining 2 x 8-port Marvell 88SE9485 with PCIe switching technology: RocketRAID 2740 and 2744.]

    • Linux/Solaris/FreeBSD support: see Marvell 88SE9485 or 88SE9480 below

8 ports

  • [SAS] Marvell 88SE9485 or 88SE9480, 6Gbps, PCIe (gen2) x8 [300-350MB/s/port]
    Availability: $280. [Update 2011-07-01: Supermicro HBA based on this chip: AOC-SAS2LP-MV8]. Areca HBA based on the 9480: ARC-1320. HighPoint HBA based on the 9485: RocketRAID 272x. Lots of bandwidth available to each port. However it is currently not supported by Solaris. I would recommend the LSI SAS2008 instead, which is cheaper, better supported, and provides just as much bandwidth.

    • Linux support: mvsas (94xx: 2.6.31+, ARC-1320: 2.6.32+)
    • Solaris support: not supported (see 88SE6480)
    • Mac OS X support: [Update 2014-06-26: the only 88SE9485 or 88SE9480 HBAs supported by Mountain Lion (10.8) and up seem to be HighPoint HBAs]
  • [SAS] LSI SAS2008, 6Gbps, PCIe (gen2) x8 [300-350MB/s/port]
    Availability: $130 $140 $180 $220 $220 $230 $240 $290. [Update 2010-12-21: Intel HBA based on this chip: RS2WC080]. Supermicro HBAs based on this chip: AOC-USAS2-L8i AOC-USAS2-L8e (these are 2 “UIO” cards with the electronic components mounted on the other side of the PCB which may not be mechanically compatible with all chassis). LSI HBAs based on this chip: LSISAS9200-8e LSISAS9210-8i LSISAS9211-8i LSISAS9212-4i4e. Lots of bandwidth per port. Good Linux and Solaris support.

  • [SAS] LSI SAS1068E, 3Gbps, PCIe (gen1) x8 [150-175MB/s/port]
    Availability: $110 $120 $150 $150. Intel HBA based on this chip: SASUC8I. Supermicro HBAs based on this chip: AOC-USAS-L8i AOC-USASLP-L8i (these are 2 “UIO” cards – see warning above.) LSI HBAs based on this chip: LSISAS3081E-R LSISAS3801E. Can provide 150-175MB/s/port of concurrent I/O, which is good enough for HDDs (but not SSDs). Good Linux and Solaris support. This chip is popular because it has very good Solaris support and was chosen by Sun for their second-generation Sun Fire X4540 Server “Thor”. However, beware: this chip does not support drives larger than 2TB.

    • Linux support: mptsas
    • Solaris support: mpt
    • FreeBSD support: mpt (supported at least since 7.3)
  • [SATA] Marvell 88SX6081, 3Gbps, PCI-X 64-bit 133MHz [107MB/s/port]
    Availability: $100. Supermicro HBAs based on this chip: AOC-SAT2-MV8. It is based on PCI-X, which is an aging technology being replaced by PCIe. The approximately 107MB/s/port of concurrent I/O it supports is a bottleneck with modern HDDs. However, this chip is especially popular because it has very good Solaris support and was chosen by Sun for their first generation Sun Fire X4500 Server “Thumper”.

    • Linux support: sata_mv (no suspend support)
    • Solaris support: marvell88sx
    • FreeBSD support: ata (supported at least since 7.0, if the hptrr driver is commented out)
  • [SAS] Marvell 88SE6485 or 88SE6480, 3Gbps, PCIe (gen1) x4 [75-88MB/s/port]
    Availability: $100. Supermicro HBAs based on this chip: AOC-SASLP-MV8. The PCIe x4 link is a bottleneck for 8 drives, restricting the concurrent I/O to 75-88MB/s/port. A better and slightly more expensive alternative is the LSI SAS1068E.

4 ports

  • [SAS] LSI SAS2004, 6Gbps, PCIe (gen2) x4 [300-350MB/s/port]
    Availability: $160. LSI HBA based on this chip: LSISAS9211-4i. Quite expensive; I would recommend buying a (cheaper!) 8-port controller.

  • [SAS] LSI SAS1064E, 3Gbps, PCIe (gen1) x8 [300-350MB/s/port]
    Availability: $120 $130. Intel HBA based on this chip: SASWT4I. [Update 2010-10-27: LSI HBA based on this chip: LSISAS3041E-R.] It is quite expensive. [Update 2014-12-04: And it does not support drives larger than 2TB.] For these reasons, I recommend instead buying a cheaper 8-port controller.

    • Linux support: mptsas
    • Solaris support: mpt
    • FreeBSD support: mpt (supported at least since 7.3)
  • [SAS] Marvell 88SE6445 or 88SE6440, 3Gbps, PCIe (gen1) x4 [150-175MB/s/port]
    Availability: $80. Areca HBA based on the 6440: ARC-1300. Adaptec HBA based on the 6440: ASC-1045/1405. Provides good bandwidth at a decent price.

    • Linux support: mvsas (6445: 2.6.25 or 2.6.31 ?, 6440: 2.6.25+, ARC-1300: 2.6.32+)
    • Solaris support: not supported (see 88SE6480)
  • [SATA] Marvell 88SX7042, 3Gbps, PCIe (gen1) x4 [150-175MB/s/port]
    Availability: $70. Adaptec HBA based on this chip: AAR-1430SA. Rosewill HBA based on this chip: RC-218. This is the only 4-port SATA controller supported by Linux providing acceptable throughput to each port. [2010-05-30 update: I bought one for $50 from Newegg in October 2009. Listed at $70 when I wrote this blog. Currently out of stock and listed at $90. Its popularity is spreading…]

  • [SAS] Marvell 88SE6340, 3Gbps, PCIe (gen1) x1 [38-44MB/s/port]
    Hard to find. Only found references to this chip on Marvell’s website. Performance is low anyway (38-44MB/s/port).

    • Linux support: mvsas
    • Solaris support: not supported (see 88SE6480)
  • [SATA] Marvell 88SE6145 or 88SE6141, 3Gbps, PCIe (gen1) x1 [38-44MB/s/port]
    Hard to find. Chip seems to be mostly found on motherboards for onboard SATA. Performance is low anyway (38-44MB/s/port).

    • Linux support: ahci
    • Solaris support: ahci
    • FreeBSD support: ahci

2 ports

  • [SATA] Marvell 88SE9128 or 88SE9125 or 88SE9120, 6Gbps, PCIe (gen2) x1 [150-175MB/s/port]
    Availability: $25 $35. HighPoint HBA based on this chip: Rocket 620. LyCOM HBA based on this chip: PE-115. Koutech HBA based on this chip: PESA230. This is the only 2-port chip on the market with no bottleneck caused by the PCIe link at Max_Payload_Size=128. Pretty surprising that it is being sold for such a low price.

    • Linux support: ahci
    • Solaris support: not supported [Update 2010-09-21: Despite being AHCI-compliant, this series of chips seems unsupported by Solaris according to reader comments, see below.]
    • FreeBSD support: ahci
  • [SATA] Marvell 88SE6121, 3Gbps, PCIe (gen1) x1 [75-88MB/s/port]
    Hard to find. Chip seems to be mostly found on motherboards for onboard SATA.

    • Linux support: ahci
    • Solaris support: ahci
    • FreeBSD support: ahci
  • [SATA] JMicron JMB362 or JMB363 or JMB366, 3Gbps, PCIe (gen1) x1 [75-88MB/s/port]
    Availability: $22.

    • Linux support: ahci
    • Solaris support: ahci
    • FreeBSD support: ahci
  • [SATA] SiI3132, 3Gbps, PCIe (gen1) x1 [75-88MB/s/port]
    Availability: $20. Warning: the overall bottleneck of the PCIe link is 150-175MB/s, or 75-88MB/s/port, but the chip has a 110-120MB/s bottleneck per port. So a single SATA device on a single port cannot fully use the 150-175MB/s by itself, it will be bottlenecked at 110-120MB/s.

Finding cards based on these controller chips can be surprisingly difficult (I have had to zoom in on product images on newegg.com to read the inscription on the chip before buying), hence the links to online retailers included above.

For reference, the maximum practical throughputs per port I assumed have been computed with these formulas (a worked example follows the list):

  • For PCIe gen2: 300-350MB/s (60-70% of 500MB/s) * pcie-link-width / number-of-ports
  • For PCIe gen1: 150-175MB/s (60-70% of 250MB/s) * pcie-link-width / number-of-ports
  • For PCI-X 64-bit 133MHz: 853MB/s (80% of 1066MB/s) / number-of-ports
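
For example, applying these formulas to two of the 8-port controllers listed above reproduces the numbers quoted in square brackets:

  • LSI SAS2008, PCIe gen2 x8, 8 ports: 300-350MB/s * 8 / 8 = 300-350MB/s/port
  • Marvell 88SE6485/6480, PCIe gen1 x4, 8 ports: 150-175MB/s * 4 / 8 = 75-88MB/s/port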

To anyone building ZFS or Linux MD RAID storage servers, I recommend first making use of all onboard AHCI ports on the motherboard, then putting any extra disks on a discrete controller. Specifically, I recommend these:

  • For a 2-port controller: Marvell 88SE9128 or 88SE9125 or 88SE9120. I recommend it not primarily because it is SATA 6Gbps, but because it supports PCIe gen2, which allows the controller to handle an overall throughput of at least 300-350MB/s, or 150-175MB/s/port, with a default PCIe Max_Payload_Size setting of 128 bytes. It is also fully AHCI compliant, in other words robust, well designed, and compatible with virtually all operating systems; a notable exception is Solaris, for which I recommend instead the next best controller: JMicron JMB362 or JMB363 or JMB366. The icing on the cake is that cards using these chips are inexpensive (starting from $22, or $11/port).
  • For an 8-port controller: LSI SAS1068E, if you are fine with it only supporting drives up to 2TB. Controllers based on this chip can be found inexpensively (starting from $110, or $13.75/port) and are supported out of the box by many current and older Linux and Solaris versions. In fact this chip is the one that Sun used in their second-generation Sun Fire X4540 Server “Thor”. The 150-175MB/s/port it can sustain with concurrent I/O on all ports, due to the PCIe bottleneck, is sufficient for current disks. However, if you need more throughput (e.g. you are using SSDs), or need to use drives larger than 2TB, then go for its more expensive successor, the LSI SAS2008, which supports PCIe gen2 and should allow for 300-350MB/s/port before hitting the PCIe bottleneck.


How To: Set up Time Machine for Multiple Macs on FreeNAS (9.2.1.3)

This is copied from here.

FreeNAS
is awesome. Also, FreeNAS is hard… I recently switched from a
Synology device, and while I am already appreciating the increase in
functionality and power, it’s certainly not as easy to do some basic
tasks. One of those tasks is setting up a Time Machine share where all
of my household Macs can back up. Between reading the tutorials and
giving some trial and error myself, I think I have come up with a good
solution.
And before I get started with the step by step guide, let me reiterate
one thing: Permissions, Permissions, Permissions! If you ever find
yourself banging your head against a wall because something in FreeNAS
isn’t working as you expect it to, the likely culprit is permissions.
Once you wrap your brain around them, though, things become simpler.
Hopefully this guide helps put a foundation around that.

[Screenshot 1: The Default FreeNAS Home Screen]

This article assumes that you have FreeNAS already up and running on
your network and that you’re able to connect to the main home screen
with your web browser. I recommend giving it a static IP, as well. Our
first step will be to create a group / user for Time Machine backups.

[Screenshot 2]

Under the “Account” section on the left, click “Groups,” and then click “Add Group.”

[Screenshot 3]

You don’t need to change the default value for the group ID; put
something like “time-machine” for the group name, leave everything else
at its default, and click OK.

The next step is to create a ZFS dataset where we’re going to put the
Time Machine backups. The dataset must be on a ZFS volume. I’m assuming
you have already created a ZFS volume with your disks here, but if you
haven’t, stop reading this guide and go read the FreeNAS ZFS documentation here.
If you have already created the volume, create a dataset. Datasets can
be nested inside of other datasets, so I actually have one dataset called
“Backup” and, inside of that one, one called “Time-Machine”; it
really just depends on how you want things set up.

[Screenshot 4]

After you enter the name, “Time-Machine”, leave all of the default
values alone. The below screenshot shows how I have “Time-Machine”
nested inside of my Backup dataset.

[Screenshot 5]
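
If you are more comfortable in the shell, the dataset creation above boils down to a couple of zfs commands (a sketch only; I’m assuming a pool named “tank” and the nested Backup/Time-Machine layout I described, so adjust the names to your setup):

zfs create tank/Backup                # parent dataset (skip if it already exists)
zfs create tank/Backup/Time-Machine   # the dataset that will hold the Time Machine backups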

So now we have a dataset. This is going to be where all of our Time
Machine backups get saved. The next step is the most important and the
one that has bitten me before… so don’t forget it. We need to change
the permissions on the “Time-Machine” dataset. Recall that we initially
created a group called “time-machine” – we are now going to set things
up such that any user in the “time-machine” group can write to the
“Time-Machine” dataset. Click on the “Time-Machine” dataset and then
click on the icon with a key on it to change its permissions.

[Screenshot 6]

When you click that, a permissions dialog box will pop up.

[Screenshot 7]

I chose not to change the default user owner of “root.” However,
definitely change the group owner. In the drop down box, the
“time-machine” group that we previously created should be selectable.
Click that and then make sure to have the boxes checked as I have in the
image above. We want any user in the group to have read / write /
execute privileges.
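
For reference, those checkboxes boil down to ordinary Unix ownership and mode bits on the dataset’s mount point. A rough shell equivalent (a sketch only; I’m assuming the dataset is mounted at /mnt/tank/Backup/Time-Machine, so adjust the path to your pool and dataset names):

chown -R root:time-machine /mnt/tank/Backup/Time-Machine   # owner root, group time-machine
chmod -R 770 /mnt/tank/Backup/Time-Machine                 # read/write/execute for owner and group, nothing for others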

Click the “Change” button to have the new permissions take effect.
Now it’s time to create a user for the Time Machine backup. I believe it
is best to create a separate user for each computer (and I’ll explain
why at the end of the post), so just create a user whose name reflects
each computer. For example, the user I’m creating is called
“kevinmacbookair.”

[Screenshot 8]

Once again, you navigate over to the left column to create a new
user. Leave the “User ID” field as the default. Give your username a
simple lowercase name like mine. Uncheck the box about creating a new
primary group for the user. Instead, go to the drop down list and select
“time-machine” in there. In the full name, put a descriptive name. Type
in a password, and then you’re good to go.

So far we have created a group called “time-machine” which has full
access to the “Time-Machine” dataset, and added a user that is part of
the “time-machine” group. Easy! The last thing we need to do is create
an AFP (Apple Filing Protocol) share that will broadcast this over the
network so your Mac can see it. To do this, click the “Sharing” link in
the column on the far left and click the button to create a new AFP
share.

[Screenshot 9]

Name your share something you like, and then use the file browser to
make sure that the “Path” is set to the ZFS dataset that we created for
our Time Machine backups. Next, for the “Allow List” and “Read-write
Access” fields, we want to put the group that we created, “time-machine”;
however, because it’s a group and not a user, we need to put the “@”
symbol in front of it: “@time-machine”. Next, make sure the “Time
Machine” box is checked. Finally, take a look at those check boxes of
privileges and make sure they match what’s listed above. Then click OK.
At this point, we’re done with everything on the FreeNAS system. It’s
now time to set up Time Machine on the Mac!

[Screenshot 10]

On the Mac, just open up the Time Machine preferences, and if you go
to select a disk, you should find the one we created there! It will ask
you for a username / password, and you want to make sure you enter the
machine-specific one we created in FreeNAS, not your OS X username /
password.

[Screenshot 11]
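
If the share does not show up in the list, or you simply prefer the command line, OS X’s tmutil can point Time Machine at the share directly (a sketch only; “kevinmacbookair”, the PASSWORD placeholder, the hostname “freenas.local” and the share name “TimeMachine” are example values from this guide, so substitute your own):

sudo tmutil setdestination afp://kevinmacbookair:PASSWORD@freenas.local/TimeMachine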

You should be golden! If you want to add more than one computer, you
don’t need to add any new AFP shares or anything like that. Just create
a new user for each machine, and make sure that each user is part of the
“time-machine” group that we created earlier. The final improvement to
make this work even better would be to cap how much space each
computer gets for its backups. For example, my MacBook Air has 256GB of
space, and anything on it is also on my other machines, so I really
wouldn’t want to give it more than 300GB of usable space for historical
backups. Time Machine will automatically delete the older ones if it
runs out of room. My MacBook Pro, on the other hand, is loaded up with
all of my important data, and I might want to give it 2x the space of
its SSD. Right now there isn’t a great way to do this for multiple Macs
in FreeNAS, but a feature is coming soon that will make it easy:
per-user quotas, which will let us specify the maximum amount of space
each user is allowed.
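
Until per-user quotas land, one partial workaround is a plain ZFS quota on the dataset, which at least caps the total space that all of the Time Machine backups combined can consume (a sketch only; again I’m assuming a pool named “tank” and the Backup/Time-Machine dataset from earlier):

zfs set quota=1T tank/Backup/Time-Machine   # cap the whole dataset at 1TB
zfs get quota tank/Backup/Time-Machine      # confirm the setting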

I hope this guide was useful!


How to rename a ZFS Volume on FreeNAS

What you basically need to do is export and then re-import the zpool from the shell command line (not the web interface).

The commands are fairly simple.

zpool export [current zpool name]

zpool import [current zpool name] [new zpool name]

Example:

zpool export volume0

zpool import volume0 volume1
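
Before touching the web interface, you can confirm from the shell that the rename took effect:

zpool list            # the pool should now show up under its new name (volume1 in the example above)
zpool status volume1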

Keep in mind that afterwards (at least I had to do this) you need to detach the volumes (zpools) from the system in the web interface (DON’T CHECK THE OPTION TO ERASE DATA).

Reboot.
Auto-import volumes (zpools).
And bingo, you have the volumes back with the new names and all of your data kept intact!

More Information [Detailed]:

http://docs.huihoo.com/opensolaris/solaris-zfs-administration-guide/html/ch04s06.html
http://doc.freenas.org/index.php/Volumes [Detach Volume Section]
http://prefetch.net/blog/index.php/2006/11/15/renaming-a-zfs-pool/
http://forums.freenas.org/threads/renaming-volume-with-data-already-in-it.13061/