itramblings

Ramblings from an IT manager and long-time developer.


Quick computer setup using Chocolatey

Here is a quick script for setting up a new computer using Chocolatey:

@"%SystemRoot%\System32\WindowsPowerShell\v1.0\powershell.exe" -NoProfile -InputFormat None -ExecutionPolicy Bypass -Command "iex ((New-Object System.Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))" && SET "PATH=%PATH%;%ALLUSERSPROFILE%\chocolatey\bin"

cinst -y 7zip
cinst -y notepadplusplus
cinst -y notepadreplacer
cinst -y javaruntime
cinst -y spacesniffer
cinst -y sysinternals
cinst -y firefox
cinst -y googlechrome
cinst -y vcredist2010 
cinst -y vcredist2012 
cinst -y vcredist2015 
cinst -y silverlight
cinst -y mremoteng

 


Ramblings on blockchain

Some thoughts on what could be done with blockchain

  • Supply Chain/Inventory Validation
    • Blockchain opens up the possibility of building a “trusted supply chain” that is not dependent solely on partners reporting in a reactive manner. Through the use of blockchain it is now possible to validate the who, what, when, where, and how many in a way that cannot be forged. Consider the idea of recording the entry/exit of a “group of items”, along with their unique identifiers, in a blockchain ledger. The result is an immutable record that traces things down to the item level and can be validated at any point along the way without relying on any specific partner’s data systems. This could have a huge impact on fraud, product loss, “accidental redirection” of inventory, etc.
  • Embedded Software Validation
    • Consider the recent IoT attack on Dyn DNS that took out DNS on the east coast for a few hours. This could have been prevented if there were a way for embedded systems to validate not only that individual components (DLLs, code, etc.) are valid, but also that the entire system state is valid (not compromised). This could be done by setting up a blockchain that stores enough information for an embedded system to validate that all components are in a state that the provider considers “valid” and “up to date”.
  • Intellectual Collaboration
    • Today more and more companies are working together to bring new products to market and are seeing a lot of value from these partnerships. The hard part is keeping track of who contributed what and who had the first real substantive idea that led to a product/solution. Blockchain could be used to set up an “ideation and collaboration” ledger system that would in turn establish a “trusted system of record” for the entities involved in the collaboration.
      Note: This could just as easily be used internally.
  • Licensing/Royalties
    • Blockchain opens up the potential for a self-enforcing contract that can be validated in real time in an automated way. Consider the music industry (or any industry that licenses something, for that matter). Today these contracts are managed by intermediaries, manually maintained, and “audited” for compliance and payment. With blockchain you can record not only a “purchase” but also the “terms” associated with that purchase. This is then something that can be validated at any point in time, allowing systems to be designed to easily and cost-effectively report usage (and potentially track incorrect usage through thumbprints).

At the end of the day, most of these ideas come down to leveraging blockchain to manage “contracts” in a way that can be automated and validated in real time, in a standard and cost-effective manner.


FreeNAS shell commands for restarting the AD connection

Here is a quick set of shell commands to reset and enable AD on FreeNAS 9.x:

service ix-kerberos stop
service ix-nsswitch stop
service ix-kinit stop
service ix-activedirectory stop
service ix-pam stop
service ix-cache stop

sqlite3 /data/freenas-v1.db "update directoryservice_activedirectory set ad_enable=1;"
echo $?
service ix-kerberos start
service ix-nsswitch start
service ix-kinit start
service ix-kinit status
echo $?
klist

python /usr/local/www/freenasUI/middleware/notifier.py start cifs
service ix-activedirectory start
service ix-activedirectory status
echo $?
python /usr/local/www/freenasUI/middleware/notifier.py restart cifs
service ix-pam start
service ix-cache start &

 


Throttling asynchronous tasks

This is copied from this Stack Overflow Post

I would like to run a bunch of async tasks, with a limit on how many tasks may be pending completion at any given time.

Say you have 1000 URLs, and you only want to have 50 requests open at a time; but as soon as one request completes, you open up a connection to the next URL in the list. That way, there are always exactly 50 connections open at a time, until the URL list is exhausted.

I also want to utilize a given number of threads if possible.

I came up with an extension method, ThrottleTasksAsync that does what I want. Is there a simpler solution already out there? I would assume that this is a common scenario.

Usage:

class Program
{
    static void Main(string[] args)
    {
        Enumerable.Range(1, 10).ThrottleTasksAsync(5, 2, async i => { Console.WriteLine(i); return i; }).Wait();

        Console.WriteLine("Press a key to exit...");
        Console.ReadKey(true);
    }
}

Here is the code:

public static class EnumerableExtensions
{
    public static async Task<Result_T[]> ThrottleTasksAsync<Enumerable_T, Result_T>(this IEnumerable<Enumerable_T> enumerable, int maxConcurrentTasks, int maxDegreeOfParallelism, Func<Enumerable_T, Task<Result_T>> taskToRun)
    {
        var blockingQueue = new BlockingCollection<Enumerable_T>(new ConcurrentBag<Enumerable_T>());

        var semaphore = new SemaphoreSlim(maxConcurrentTasks);

        // Run the throttler on a separate thread.
        var t = Task.Run(() =>
        {
            foreach (var item in enumerable)
            {
                // Wait for the semaphore
                semaphore.Wait();
                blockingQueue.Add(item);
            }

            blockingQueue.CompleteAdding();
        });

        var taskList = new List<Task<Result_T>>();

        Parallel.ForEach(IterateUntilTrue(() => blockingQueue.IsCompleted), new ParallelOptions { MaxDegreeOfParallelism = maxDegreeOfParallelism },
        _ =>
        {
            Enumerable_T item;

            if (blockingQueue.TryTake(out item, 100))
            {
                taskList.Add(
                    // Run the task
                    taskToRun(item)
                    .ContinueWith(tsk =>
                        {
                            // For effect
                            Thread.Sleep(2000);

                            // Release the semaphore
                            semaphore.Release();

                            return tsk.Result;
                        }
                    )
                );
            }
        });

        // Await all the tasks.
        return await Task.WhenAll(taskList);
    }

    static IEnumerable<bool> IterateUntilTrue(Func<bool> condition)
    {
        while (!condition()) yield return true;
    }
}

The method utilizes BlockingCollection and SemaphoreSlim to make it work. The throttler is run on one thread, and all the async tasks are run on the other thread. To achieve parallelism, I added a maxDegreeOfParallelism parameter that’s passed to a Parallel.ForEach loop re-purposed as a while loop.

Bonus: To get around the problem in BlockingCollection where an exception is thrown in Take() when CompleteAdding() is called, I’m using the TryTake overload with a timeout. If I didn’t use the timeout in TryTake, it would defeat the purpose of using a BlockingCollection since TryTake won’t block. Is there a better way? Ideally, there would be a TakeAsync method.
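
For comparison, here is a minimal alternative sketch (not part of the original post) that throttles with SemaphoreSlim.WaitAsync and Task.WhenAll instead of a BlockingCollection; the method and parameter names are illustrative:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

public static class ThrottleSketch
{
    public static async Task<TResult[]> ThrottleAsync<TItem, TResult>(
        this IEnumerable<TItem> items, int maxConcurrent, Func<TItem, Task<TResult>> taskToRun)
    {
        using (var semaphore = new SemaphoreSlim(maxConcurrent))
        {
            // Each item gets its own async operation, but at most maxConcurrent
            // of them are past the semaphore (i.e. actually running) at any time.
            var tasks = items.Select(async item =>
            {
                await semaphore.WaitAsync();
                try { return await taskToRun(item); }
                finally { semaphore.Release(); }
            }).ToList();

            return await Task.WhenAll(tasks);
        }
    }
}

Unlike the extension method above, this sketch does not try to control the degree of parallelism of the continuations; it only limits how many of the supplied tasks are in flight at once.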


From 32 to 2 ports: Ideal SATA/SAS Controllers for ZFS & Linux MD RAID

An absolutely fantastic post on controllers for home storage systems can be found here.

I need a lot of reliable and cheap storage space (media collection, backups). Hardware RAID tends to be expensive and clunky. I recognize quite a few advantages in ZFS on Solaris/FreeBSD, and Linux MD RAID:

  • Performance. In many cases they are as fast as hardware RAID, and sometimes faster because the OS is aware of the RAID layout and can optimize I/O patterns for it. Indeed, even the most compute intensive RAID5 or 6 parity calculations take negligible CPU time on a modern processor. For a concrete example, Linux 2.6.32 on a Phenom II X4 945 3.0GHz computes RAID6 parity at close to 8 GB/s on a single core (check dmesg: “raid6: using algorithm sse2x4 (7976 MB/s)”). So achieving a throughput of 500 MB/s on a Linux MD raid6 array requires spending less than 1.5% CPU time computing parity. Now regarding the optimized I/O patterns, here is an interesting anecdote: one of the steps that Youtube took in its early days to scale their infrastructure up was to switch from hardware RAID to software RAID on their database server. They noticed a 20-30% increase in I/O throughput. Watch Seattle Conference on Scalability: YouTube Scalability @ 34’50”.
  • Scalability. ZFS and Linux MD RAID allow building arrays across multiple disk controllers, or multiple SAN devices, alleviating throughput bottlenecks that can arise on PCIe links, or GbE links. Whereas hardware RAID is restricted to a single controller, with no room for expansion.
  • Reliability. No hardware RAID = one less hardware component that can fail.
  • Ease of recoverability. The data can be recovered by putting the disks in any server. There is no reliance on a particular model of RAID controller.
  • Flexibility. It is possible to create arrays on any disk on any type of controller in the system, or to move disks from one controller to another.
  • Ease of administration. There is only one software interface to learn: zpool(1M) or mdadm(8). No need to install proprietary vendor tools, or to reboot into BIOSes to manage arrays.
  • Cost. Obviously cheaper since there is no hardware RAID controller to buy.

Consequently, many ZFS and Linux MD RAID users, such as me, look for non-RAID controllers that are simply reliable, fast, cheap, and otherwise come with no bells and whistles. Most motherboards have up to 4 or 6 onboard ports (be sure to always enable AHCI mode in the BIOS as it is the best designed hardware interface that a chip can present to the OS to enable maximum performance), but for more than 4 or 6 disks, there are surprisingly not that many choices of controllers. Over the years, I have spent quite some time on the controllers manufacturers’ websites, the LKML, linux-ide and ZFS mailing lists, and have established a list of SATA/SAS controllers that are ideal for ZFS or Linux MD RAID. I also included links to online retailers because some of these controllers are not that easy to find online.

The reason the list contains SAS controllers is that they are just as good an option as SATA controllers: many of them are as inexpensive as SATA controllers (even though they target the enterprise market), they are fully compatible with SATA 3Gbps and 6Gbps disks, and they support all the usual features: hotplug, queueing, etc. A SAS controller typically presents SFF-8087 connectors, also known as internal mini SAS, or even iPASS connectors. Up to 4 SATA drives can be connected to such a connector with an SFF-8087 to 4xSATA forward breakout cable (as opposed to reverse breakout). This type of cable usually sells for $15-30. Here are a few links if you have trouble finding them.

There are really only 4 significant manufacturers of discrete non-RAID SATA/SAS controller chips on the market: LSI, Marvell, JMicron, and Silicon Image. Controller cards from Adaptec, Areca, HighPoint, Intel, Supermicro, Tyan, etc, most often use chips from one of these 4 manufacturers.

Here is my list of non-RAID SATA/SAS controllers, from 16-port to 2-port controllers, with the kernel driver used to support them under Linux, and Solaris. There is also limited information on FreeBSD support. I focused on native PCIe controllers only, with very few PCI-X (actually only 1 very popular: 88SX6081). The MB/s/port number in square brackets indicates the maximum practical throughput that can be expected from each SATA port, assuming concurrent I/O on all ports, given the bottleneck of the host link or bus (PCIe or PCI-X). I assumed for all PCIe controllers that only 60-70% of the maximum theoretical PCIe throughput can be achieved, and for all PCI-X controllers that only 80% of the maximum theoretical PCI-X throughput can be achieved on this bus. These assumptions concur with what I have seen in real world benchmarks assuming a Max_Payload_Size setting of either 128 or 256 bytes for PCIe (a common default value), and a more or less default PCI latency timer setting for PCI-X. As of May 2010, modern disks can easily reach 120-130MB/s of sequential throughput at the beginning of the platter, so avoid controllers with a throughput of less than 150MB/s/port if you want to reduce the possibility of bottlenecks to zero.

32 ports

  • [SAS] 4 x switched Marvell 88SE9485, 6Gbps, PCIe (gen2) x16 [150-175MB/s/port]
    [Update 2011-09-29: Availability: $850 $850. This is a HighPoint HBA combining 4 x 8-port Marvell 88SE9485 with PCIe switching technology: RocketRAID 2782.]

    • Linux/Solaris/FreeBSD support: see Marvell 88SE9485 or 88SE9480 below

24 ports

  • [SAS] 3 x switched Marvell 88SE9485, 6Gbps, PCIe (gen2) x16 [200-233MB/s/port]
    [Update 2011-09-29: Availability: $540 $620. This is a HighPoint HBA combining 3 x 8-port Marvell 88SE9485 with PCIe switching technology: RocketRAID 2760A.]

    • Linux/Solaris/FreeBSD support: see Marvell 88SE9485 or 88SE9480 below

16 ports

  • [SAS] LSI SAS2116, 6Gbps, PCIe (gen2) x8 [150-175MB/s/port]
    Availability: $400 $510. LSI HBA based on this chip: LSISAS9200-16e, LSISAS9201-16i. [Update 2010-10-27: only the model with external ports used to be available but now the one with internal ports is available and less expensive.]

  • [SAS] 2 x switched Marvell 88SE9485, 6Gbps, PCIe (gen2) x16 [300-350MB/s/port]
    [Update 2011-09-29: Availability: $450 $480. This is a HighPoint HBA combining 2 x 8-port Marvell 88SE9485 with PCIe switching technology: RocketRAID 2740 and 2744.]

    • Linux/Solaris/FreeBSD support: see Marvell 88SE9485 or 88SE9480 below

8 ports

  • [SAS] Marvell 88SE9485 or 88SE9480, 6Gbps, PCIe (gen2) x8 [300-350MB/s/port]
    Availability: $280. [Update 2011-07-01: Supermicro HBA based on this chip: AOC-SAS2LP-MV8]. Areca HBA based on the 9480: ARC-1320. HighPoint HBA based on the 9485: RocketRAID 272x. Lots of bandwidth available to each port. However it is currently not supported by Solaris. I would recommend the LSI SAS2008 instead, which is cheaper, better supported, and provides just as much bandwidth.

    • Linux support: mvsas (94xx: 2.6.31+, ARC-1320: 2.6.32+)
    • Solaris support: not supported (see 88SE6480)
    • Mac OS X support: [Update 2014-06-26: the only 88SE9485 or 88SE9480 HBAs supported by Mountain Lion (10.8) and up seem to be HighPoint HBAs]
  • [SAS] LSI SAS2008, 6Gbps, PCIe (gen2) x8 [300-350MB/s/port]
    Availability: $130 $140 $180 $220 $220 $230 $240 $290. [Update 2010-12-21: Intel HBA based on this chip: RS2WC080]. Supermicro HBAs based on this chip: AOC-USAS2-L8i AOC-USAS2-L8e (these are 2 “UIO” cards with the electronic components mounted on the other side of the PCB which may not be mechanically compatible with all chassis). LSI HBAs based on this chip: LSISAS9200-8e LSISAS9210-8i LSISAS9211-8i LSISAS9212-4i4e. Lots of bandwidth per port. Good Linux and Solaris support.

  • [SAS] LSI SAS1068E, 3Gbps, PCIe (gen1) x8 [150-175MB/s/port]
    Availability: $110 $120 $150 $150. Intel HBAs based on this chip: SASUC8I. Supermicro HBAs based on this chip: AOC-USAS-L8i AOC-USASLP-L8i (these are 2 “UIO” cards – see warning above.) LSI HBAs based on this chip: LSISAS3081E-R LSISAS3801E. Can provide 150-175MB/s/port of concurrent I/O, which is good enough for HDDs (but not SSDs). Good Linux and Solaris support. This chip is popular because it has very good Solaris support and was chosen by Sun for their second generation Sun Fire X4540 Server “Thumper”. However, beware, this chip does not support drives larger than 2TB.

    • Linux support: mptsas
    • Solaris support: mpt
    • FreeBSD support: mpt (supported at least since 7.3)
  • [SATA] Marvell 88SX6081, 3Gbps, PCI-X 64-bit 133MHz [107MB/s/port]
    Availability: $100. Supermicro HBA based on this chip: AOC-SAT2-MV8. Based on PCI-X, which is an aging technology being replaced with PCIe. The approximate 107MB/s/port of concurrent I/O it supports is a bottleneck with modern HDDs. However this chip is especially popular because it has very good Solaris support and was chosen by Sun for their first generation Sun Fire X4500 Server “Thumper”.

    • Linux support: sata_mv (no suspend support)
    • Solaris support: marvell88sx
    • FreeBSD support: ata (supported at least since 7.0, if the hptrr driver is commented out)
  • [SAS] Marvell 88SE6485 or 88SE6480, 3Gbps, PCIe (gen1) x4 [75-88MB/s/port]
    Availability: $100. Supermicro HBAs based on this chip: AOC-SASLP-MV8. The PCIe x4 link is a bottleneck for 8 drives, restricting the concurrent I/O to 75-88MB/s/port. A better and slightly more expensive alternative is the LSI SAS1068E.

4 ports

  • [SAS] LSI SAS2004, 6Gbps, PCIe (gen2) x4 [300-350MB/s/port]
    Availability: $160. LSI HBA based on this chip: LSISAS9211-4i. Quite expensive; I would recommend buying a (cheaper!) 8-port controller.

  • [SAS] LSI SAS1064E, 3Gbps, PCIe (gen1) x8 [300-350MB/s/port]
    Availability: $120 $130. Intel HBA based on this chip: SASWT4I. [Update 2010-10-27: LSI HBA based on this chip: LSISAS3041E-R.] It is quite expensive. [Update 2014-12-04: And it does not support drives larger than 2TB.] For these reasons, I recommend instead buying a cheaper 8-port controller.

    • Linux support: mptsas
    • Solaris support: mpt
    • FreeBSD support: mpt (supported at least since 7.3)
  • [SAS] Marvell 88SE6445 or 88SE6440, 3Gbps, PCIe (gen1) x4 [150-175MB/s/port]
    Availability: $80. Areca HBA based on the 6440: ARC-1300. Adaptec HBA based on the 6440: ASC-1045/1405. Provides good bandwidth at a decent price.

    • Linux support: mvsas (6445: 2.6.25 or 2.6.31 ?, 6440: 2.6.25+, ARC-1300: 2.6.32+)
    • Solaris support: not supported (see 88SE6480)
  • [SATA] Marvell 88SX7042, 3Gbps, PCIe (gen1) x4 [150-175MB/s/port]
    Availability: $70. Adaptec HBA based on this chip: AAR-1430SA. Rosewill HBA based on this chip: RC-218. This is the only 4-port SATA controller supported by Linux providing acceptable throughput to each port. [2010-05-30 update: I bought one for $50 from Newegg in October 2009. Listed at $70 when I wrote this blog. Currently out of stock and listed at $90. Its popularity is spreading…]

  • [SAS] Marvell 88SE6340, 3Gbps, PCIe (gen1) x1 [38-44MB/s/port]
    Hard to find. Only found references to this chip on Marvell’s website. Performance is low anyway (38-44MB/s/port).

    • Linux support: mvsas
    • Solaris support: not supported (see 88SE6480)
  • [SATA] Marvell 88SE6145 or 88SE6141, 3Gbps, PCIe (gen1) x1 [38-44MB/s/port]
    Hard to find. Chip seems to be mostly found on motherboards for onboard SATA. Performance is low anyway (38-44MB/s/port).

    • Linux support: ahci
    • Solaris support: ahci
    • FreeBSD support: ahci

2 ports

  • [SATA] Marvell 88SE9128 or 88SE9125 or 88SE9120, 6Gbps, PCIe (gen2) x1 [150-175MB/s/port]
    Availability: $25 $35. HighPoint HBA based on this chip: Rocket 620. LyCOM HBA based on this chip: PE-115. Koutech HBA based on this chip: PESA230. This is the only 2-port chip on the market with no bottleneck caused by the PCIe link at Max_Payload_Size=128. Pretty surprising that it is being sold for such a low price.

    • Linux support: ahci
    • Solaris support: not supported [Update 2010-09-21: Despite being AHCI-compliant, this series of chips seems unsupported by Solaris according to reader comments, see below.]
    • FreeBSD support: ahci
  • [SATA] Marvell 88SE6121, 3Gbps, PCIe (gen1) x1 [75-88MB/s/port]
    Hard to find. Chip seems to be mostly found on motherboards for onboard SATA.

    • Linux support: ahci
    • Solaris support: ahci
    • FreeBSD support: ahci
  • [SATA] JMicron JMB362 or JMB363 or JMB366, 3Gbps, PCIe (gen1) x1 [75-88MB/s/port]
    Availability: $22.

    • Linux support: ahci
    • Solaris support: ahci
    • FreeBSD support: ahci
  • [SATA] SiI3132, 3Gbps, PCIe (gen1) x1 [75-88MB/s/port]
    Availability: $20. Warning: the overall bottleneck of the PCIe link is 150-175MB/s, or 75-88MB/s/port, but the chip has a 110-120MB/s bottleneck per port. So a single SATA device on a single port cannot fully use the 150-175MB/s by itself, it will be bottlenecked at 110-120MB/s.

Finding cards based on these controller chips can be surprisingly difficult (I have had to zoom on product images on newegg.com to read the inscription on the chip before buying), hence the reason I included some links to online retailers.

For reference, the maximum practical throughputs per port I assumed have been computed with these formulas:

  • For PCIe gen2: 300-350MB/s (60-70% of 500MB/s) * pcie-link-width / number-of-ports
  • For PCIe gen1: 150-175MB/s (60-70% of 250MB/s) * pcie-link-width / number-of-ports
  • For PCI-X 64-bit 133MHz: 853MB/s (80% of 1066MB/s) / number-of-ports
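
As a quick sanity check of these formulas against figures already quoted above: the LSI SAS2008 (PCIe gen2 x8, 8 ports) works out to 300-350MB/s × 8 / 8 = 300-350MB/s/port, while the Marvell 88SE6485/6480 (PCIe gen1 x4, 8 ports) works out to 150-175MB/s × 4 / 8 = 75-88MB/s/port.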

To anyone building ZFS or Linux MD RAID storage servers, I recommend first making use of all onboard AHCI ports on the motherboard. Then put any extra disks on a discrete controller; specifically, I recommend these:

  • For a 2-port controller: Marvell 88SE9128 or 88SE9125 or 88SE9120. I do not primarily recommend it because it is SATA 6Gbps, but because it supports PCIe gen2, which allows the controller to handle an overall throughput of at least 300-350MB/s, or 150-175MB/s/port, with a default PCIe Max_Payload_Size setting of 128 bytes. It is also fully AHCI compliant, in other words robust, well-designed, and virtually compatible with all operating systems; a notable exception is Solaris for which I recommend instead the next best controller: JMicron JMB362 or JMB363 or JMB366. The icing on the cake is that cards using these chips are inexpensive (starting from $22, or $11/port).
  • For an 8-port controller: LSI SAS1068E if you are fine with it only supporting drives up to 2TB. Controllers based on this chip can be found inexpensively (starting from $110, or $13.75/port) and are supported out of the box by many current and older Linux and Solaris versions. In fact this chip is the one that Sun used in their second generation Sun Fire X4540 Server “Thumper”. The fact that it can support up to 150-175MB/s/port due to the PCIe bottleneck with concurrent I/O on all ports is sufficient for current disks. However if you need more throughput (e.g. are using SSDs), or need to use drives larger than 2TB, then go for its more expensive successor, LSI SAS2008, which supports PCIe gen2 and should allow for 300-350MB/s/port before hitting the PCIe bottleneck.


Register Linux (Ubuntu) server with Windows DNS

If you are looking for a script to help register your Linux box with a Windows DNS server, then you have come to the right place.

Note: For this to work you need to have configured Windows DNS to allow nonsecure dynamic updates

Note: This script ASSUMES that your /etc/network/interfaces is set up for a static IP and has dns-nameservers and dns-search configured

Example:

# The primary network interface
auto eth0
iface eth0 inet static
        address 192.168.1.22
        netmask 255.255.255.0
        network 192.168.1.0
        broadcast 192.168.1.255
        gateway 192.168.1.1
        # dns-* options are implemented by the resolvconf package, if installed
        dns-nameservers 192.168.1.10 #my Windows DNS server
        dns-search corp.local # my DNS zone that I want to update

Step 1: create a folder called /var/scripts

sudo mkdir /var/scripts

Step 2: create a script file called dns-update.sh

sudo vi /var/scripts/dns-update.sh

Step 3: paste the following into the script file

#!/bin/sh
ADDR=`/sbin/ifconfig eth0 | grep 'inet addr' | awk '{print $2}' | sed -e s/.*://`
HOST=`hostname -f`
DNSSERVER=`grep '[^#]dns-nameservers' /etc/network/interfaces | awk '{print $2}' | head -1`
DNSZONE=`grep '[^#]dns-search' /etc/network/interfaces | awk '{print $2}' | head -1`
echo "server $DNSSERVER" > /var/scripts/nsupdate.txt
echo "zone $DNSZONE" >> /var/scripts/nsupdate.txt
echo "update delete $HOST A" >> /var/scripts/nsupdate.txt
echo "update add $HOST 600 A $ADDR" >> /var/scripts/nsupdate.txt
echo "show" >> /var/scripts/nsupdate.txt
echo "send" >> /var/scripts/nsupdate.txt
nsupdate /var/scripts/nsupdate.txt
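
For reference, given the example interfaces file above and a (hypothetical) fully qualified hostname of myhost.corp.local, the generated /var/scripts/nsupdate.txt would look like this:

server 192.168.1.10
zone corp.local
update delete myhost.corp.local A
update add myhost.corp.local 600 A 192.168.1.22
show
send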

Step 4: Enable the script for execution

sudo chmod +x /var/scripts/dns-update.sh

Step 5: Run the script (or schedule it via cron)

sudo /var/scripts/dns-update.sh

 


Pretty good backup script for Linux folders

This was originally taken from here with some modifications by me

Automating backups with tar

It is always interesting to automate the tasks of a backup. Automation offers enormous opportunities for using your Linux server to achieve the goals you set. The example below is our backup script, called backup.cron. This script is designed to run on any computer by changing only the five variables:

  1. COMPUTER
  2. DIRECTORIES
  3. BACKUPDIR
  4. TIMEDIR
  5. BACKUPSET

We suggest that you set this script up and run it at the beginning of the month for the first time, and then run it for a month before making major changes. In our example below we do the backup to a directory on the local server BACKUPDIR, but you could modify this script to do it to a tape on the local server or via an NFS mounted file system.

  1. Create the backup script backup.cron file, touch /etc/cron.daily/backup.cron and add the following lines to this backup file:
    #!/bin/sh
    # full and incremental backup script
    # created 07 February 2000
    # Based on a script by Daniel O'Callaghan <danny@freebsd.org>
    # and modified by Gerhard Mourani <gmourani@videotron.ca>
    # and modified by Shawn Anderson <sanderson@eye-catcher.com> on 2016-08-14
    #Change the 5 variables below to fit your computer/backup
    
    COMPUTER=$(hostname) # name of this computer
    BACKUPSET=HOMEDIR # name of the backup set
    DIRECTORIES="/home" # directories to backup
    BACKUPDIR=/backups # where to store the backups
    TIMEDIR=/backups/last-full # where to store time of full backup
    TAR=/bin/tar # name and location of tar
    
    #You should not have to change anything below here
    PATH=/usr/local/bin:/usr/bin:/bin
    DOW=`date +%a` # Day of the week e.g. Mon
    DOM=`date +%d` # Date of the Month e.g. 27
    DM=`date +%d%b` # Date and Month e.g. 27Sep
    
    #Set various things up
    
    # Is PV installed?
    type pv >/dev/null 2>&1 || sudo apt-get install pv
    
    # Do the required paths exist
    if [ ! -d $BACKUPDIR ]; then
       mkdir $BACKUPDIR
    fi
    
    if [ ! -d $TIMEDIR ]; then
       mkdir $TIMEDIR
    fi
    
    # On the 1st of the month a permanent full backup is made
    # Every Sunday a full backup is made - overwriting last Sundays backup
    # The rest of the time an incremental backup is made. Each incremental
    # backup overwrites last weeks incremental backup of the same name.
    #
    # if NEWER = "", then tar backs up all files in the directories
    # otherwise it backs up files newer than the NEWER date. NEWER
    # gets it date from the file written every Sunday.
    
    # Monthly full backup
    if [ $DOM = "01" ]; then
       NEWER=""
       $TAR $NEWER cf - -C $DIRECTORIES/* | pv -s $(du -sb $DIRECTORIES | awk '{print $1}') | gzip > $BACKUPDIR/$BACKUPSET-$COMPUTER-$DM.tgz
    fi
    
    # Weekly full backup
    if [ $DOW = "Sun" ]; then
       NEWER=""
       NOW=`date +%d-%b`
    
       # Update full backup date
       echo $NOW > $TIMEDIR/$COMPUTER-full-date
       $TAR $NEWER cf - -C $DIRECTORIES/* | pv -s $(du -sb $DIRECTORIES | awk '{print $1}') | gzip > $BACKUPDIR/$BACKUPSET-$COMPUTER-$DOW.tgz
    
    # Make incremental backup - overwrite last weeks
    else
       # Get date of last full backup
       NEWER="--newer `cat $TIMEDIR/$COMPUTER-full-date`"
       $TAR $NEWER cf - -C $DIRECTORIES/* | pv -s $(du -sb $DIRECTORIES | awk '{print $1}') | gzip > $BACKUPDIR/$BACKUPSET-$COMPUTER-$DOW.tgz
    fi
    
    # Remove backup files older than 90 days (this really shouldn't be necessary unless something
    # isn't right with the auto-rotation. I have it in just for good measure.)
    find $BACKUPDIR/$BACKUPSET-$COMPUTER* -mtime +90 -exec rm {} \;
    Example 33-1. Backup directory of a week
    Here is an abbreviated look at the backup directory after one week:

    total 22217
    -rw-r--r-- 1 root root 10731288 Feb 7 11:24 deep-HOMEDIR-01Feb.tar
    -rw-r--r-- 1 root root 6879 Feb 7 11:24 deep-HOMEDIR-Fri.tar
    -rw-r--r-- 1 root root 2831 Feb 7 11:24 deep-HOMEDIR-Mon.tar
    -rw-r--r-- 1 root root 7924 Feb 7 11:25 deep-HOMEDIR-Sat.tar
    -rw-r--r-- 1 root root 11923013 Feb 7 11:24 deep-HOMEDIR-Sun.tar
    -rw-r--r-- 1 root root 5643 Feb 7 11:25 deep-HOMEDIR-Thu.tar
    -rw-r--r-- 1 root root 3152 Feb 7 11:25 deep-HOMEDIR-Tue.tar
    -rw-r--r-- 1 root root 4567 Feb 7 11:25 deep-HOMEDIR-Wed.tar
    drwxr-xr-x 2 root root 1024 Feb 7 11:20 last-full
    

    Important: The directory where the backups are stored (BACKUPDIR) and the directory where the time of the full backup is stored (TIMEDIR) must exist or be created before using the backup script, or you will receive an error message.

  2. If you are not running this backup script from the beginning of the month 01-month-year, the incremental backups will need the time of the Sunday backup to be able to work properly. If you start in the middle of the week, you will need to create the time file in the TIMEDIR. To create the time file in the TIMEDIR directory, use the following command:
    [root@deep] /# date +%d%b > /backups/last-full/myserver-full-date

    Where /backups/last-full is our TIMEDIR variable, in which we want to store the time of the full backup, and myserver-full-date is the name of the time file (where myserver is the name of your server, e.g. deep). The time file consists of a single line with the present date, i.e. 15-Feb.

  3. Make this script executable and change its default permissions to 755 so that it is writable only by the super-user root.
    [root@deep] /# chmod 755 /etc/cron.daily/backup.cron

Because this script is in the /etc/cron.daily directory, it will be automatically run as a cron job at one o’clock in the morning every day.


Async/Await – Best Practices in Asynchronous Programming

By Stephen Cleary | March 2013 (repost from here)

These days there’s a wealth of information about the new async and await support in the Microsoft .NET Framework 4.5. This article is intended as a “second step” in learning asynchronous programming; I assume that you’ve read at least one introductory article about it. This article presents nothing new, as the same advice can be found online in sources such as Stack Overflow, MSDN forums and the async/await FAQ. This article just highlights a few best practices that can get lost in the avalanche of available documentation.

The best practices in this article are more what you’d call “guidelines” than actual rules. There are exceptions to each of these guidelines. I’ll explain the reasoning behind each guideline so that it’s clear when it does and does not apply. The guidelines are summarized in Figure 1; I’ll discuss each in the following sections.

Figure 1 Summary of Asynchronous Programming Guidelines

Name | Description | Exceptions
Avoid async void | Prefer async Task methods over async void methods | Event handlers
Async all the way | Don't mix blocking and async code | Console main method
Configure context | Use ConfigureAwait(false) when you can | Methods that require context

Avoid Async Void

There are three possible return types for async methods: Task, Task<T> and void, but the natural return types for async methods are just Task and Task<T>. When converting from synchronous to asynchronous code, any method returning a type T becomes an async method returning Task<T>, and any method returning void becomes an async method returning Task. The following code snippet illustrates a synchronous void-returning method and its asynchronous equivalent:

void MyMethod()
{
  // Do synchronous work.
  Thread.Sleep(1000);
}
async Task MyMethodAsync()
{
  // Do asynchronous work.
  await Task.Delay(1000);
}

Void-returning async methods have a specific purpose: to make asynchronous event handlers possible. It is possible to have an event handler that returns some actual type, but that doesn’t work well with the language; invoking an event handler that returns a type is very awkward, and the notion of an event handler actually returning something doesn’t make much sense. Event handlers naturally return void, so async methods return void so that you can have an asynchronous event handler. However, some semantics of an async void method are subtly different than the semantics of an async Task or async Task<T> method.

Async void methods have different error-handling semantics. When an exception is thrown out of an async Task or async Task<T> method, that exception is captured and placed on the Task object. With async void methods, there is no Task object, so any exceptions thrown out of an async void method will be raised directly on the SynchronizationContext that was active when the async void method started. Figure 2 illustrates that exceptions thrown from async void methods can’t be caught naturally.

Figure 2 Exceptions from an Async Void Method Can’t Be Caught with Catch
private async void ThrowExceptionAsync()
{
  throw new InvalidOperationException();
}
public void AsyncVoidExceptions_CannotBeCaughtByCatch()
{
  try
  {
    ThrowExceptionAsync();
  }
  catch (Exception)
  {
    // The exception is never caught here!
    throw;
  }
}

These exceptions can be observed using AppDomain.UnhandledException or a similar catch-all event for GUI/ASP.NET applications, but using those events for regular exception handling is a recipe for unmaintainability.

Async void methods have different composing semantics. Async methods returning Task or Task<T> can be easily composed using await, Task.WhenAny, Task.WhenAll and so on. Async methods returning void don’t provide an easy way to notify the calling code that they’ve completed. It’s easy to start several async void methods, but it’s not easy to determine when they’ve finished. Async void methods will notify their SynchronizationContext when they start and finish, but a custom SynchronizationContext is a complex solution for regular application code.
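
A small sketch of the difference (the method names here are mine, not from the article):

static async Task DoWorkAsync() { await Task.Delay(100); }

static async Task RunAllAsync()
{
  // Task-returning methods compose naturally; this completes only
  // after all three operations have finished.
  await Task.WhenAll(DoWorkAsync(), DoWorkAsync(), DoWorkAsync());
}

// An async void method returns no handle, so the caller cannot await it,
// combine it with other operations, or easily tell when it has finished.
static async void DoWorkVoid() { await Task.Delay(100); }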

Async void methods are difficult to test. Because of the differences in error handling and composing, it’s difficult to write unit tests that call async void methods. The MSTest asynchronous testing support only works for async methods returning Task or Task<T>. It’s possible to install a SynchronizationContext that detects when all async void methods have completed and collects any exceptions, but it’s much easier to just make the async void methods return Task instead.

It’s clear that async void methods have several disadvantages compared to async Task methods, but they’re quite useful in one particular case: asynchronous event handlers. The differences in semantics make sense for asynchronous event handlers. They raise their exceptions directly on the SynchronizationContext, which is similar to how synchronous event handlers behave. Synchronous event handlers are usually private, so they can’t be composed or directly tested. An approach I like to take is to minimize the code in my asynchronous event handler—for example, have it await an async Task method that contains the actual logic. The following code illustrates this approach, using async void methods for event handlers without sacrificing testability:

private async void button1_Click(object sender, EventArgs e)
{
  await Button1ClickAsync();
}
public async Task Button1ClickAsync()
{
  // Do asynchronous work.
  await Task.Delay(1000);
}

Async void methods can wreak havoc if the caller isn’t expecting them to be async. When the return type is Task, the caller knows it’s dealing with a future operation; when the return type is void, the caller might assume the method is complete by the time it returns. This problem can crop up in many unexpected ways. It’s usually wrong to provide an async implementation (or override) of a void-returning method on an interface (or base class). Some events also assume that their handlers are complete when they return. One subtle trap is passing an async lambda to a method taking an Action parameter; in this case, the async lambda returns void and inherits all the problems of async void methods. As a general rule, async lambdas should only be used if they’re converted to a delegate type that returns Task (for example, Func<Task>).
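
To illustrate that last trap with a small sketch (the Run/RunAsync helpers here are hypothetical, not BCL members):

// The async lambda passed to Run is converted to an async void method:
// Run returns as soon as the lambda hits its first await, and any exception
// goes to the SynchronizationContext rather than to the caller.
static void Run(Action action) { action(); }

// Accepting Func<Task> instead lets the caller await the work.
static async Task RunAsync(Func<Task> func) { await func(); }

// Run(async () => await Task.Delay(1000));            // async void in disguise
// await RunAsync(async () => await Task.Delay(1000)); // awaitable async Task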

To summarize this first guideline, you should prefer async Task to async void. Async Task methods enable easier error-handling, composability and testability. The exception to this guideline is asynchronous event handlers, which must return void. This exception includes methods that are logically event handlers even if they’re not literally event handlers (for example, ICommand.Execute implementations).

Async All the Way

Asynchronous code reminds me of the story of a fellow who mentioned that the world was suspended in space and was immediately challenged by an elderly lady claiming that the world rested on the back of a giant turtle. When the man enquired what the turtle was standing on, the lady replied, “You’re very clever, young man, but it’s turtles all the way down!” As you convert synchronous code to asynchronous code, you’ll find that it works best if asynchronous code calls and is called by other asynchronous code—all the way down (or “up,” if you prefer). Others have also noticed the spreading behavior of asynchronous programming and have called it “contagious” or compared it to a zombie virus. Whether turtles or zombies, it’s definitely true that asynchronous code tends to drive surrounding code to also be asynchronous. This behavior is inherent in all types of asynchronous programming, not just the new async/await keywords.

“Async all the way” means that you shouldn’t mix synchronous and asynchronous code without carefully considering the consequences. In particular, it’s usually a bad idea to block on async code by calling Task.Wait or Task.Result. This is an especially common problem for programmers who are “dipping their toes” into asynchronous programming, converting just a small part of their application and wrapping it in a synchronous API so the rest of the application is isolated from the changes. Unfortunately, they run into problems with deadlocks. After answering many async-related questions on the MSDN forums, Stack Overflow and e-mail, I can say this is by far the most-asked question by async newcomers once they learn the basics: “Why does my partially async code deadlock?”

Figure 3 shows a simple example where one method blocks on the result of an async method. This code will work just fine in a console application but will deadlock when called from a GUI or ASP.NET context. This behavior can be confusing, especially considering that stepping through the debugger implies that it’s the await that never completes. The actual cause of the deadlock is further up the call stack when Task.Wait is called.

Figure 3 A Common Deadlock Problem When Blocking on Async Code
public static class DeadlockDemo
{
  private static async Task DelayAsync()
  {
    await Task.Delay(1000);
  }
  // This method causes a deadlock when called in a GUI or ASP.NET context.
  public static void Test()
  {
    // Start the delay.
    var delayTask = DelayAsync();
    // Wait for the delay to complete.
    delayTask.Wait();
  }
}

The root cause of this deadlock is due to the way await handles contexts. By default, when an incomplete Task is awaited, the current “context” is captured and used to resume the method when the Task completes. This “context” is the current SynchronizationContext unless it’s null, in which case it’s the current TaskScheduler. GUI and ASP.NET applications have a SynchronizationContext that permits only one chunk of code to run at a time. When the await completes, it attempts to execute the remainder of the async method within the captured context. But that context already has a thread in it, which is (synchronously) waiting for the async method to complete. They’re each waiting for the other, causing a deadlock.

Note that console applications don’t cause this deadlock. They have a thread pool SynchronizationContext instead of a one-chunk-at-a-time SynchronizationContext, so when the await completes, it schedules the remainder of the async method on a thread pool thread. The method is able to complete, which completes its returned task, and there’s no deadlock. This difference in behavior can be confusing when programmers write a test console program, observe the partially async code work as expected, and then move the same code into a GUI or ASP.NET application, where it deadlocks.

The best solution to this problem is to allow async code to grow naturally through the codebase. If you follow this solution, you’ll see async code expand to its entry point, usually an event handler or controller action. Console applications can’t follow this solution fully because the Main method can’t be async. If the Main method were async, it could return before it completed, causing the program to end. Figure 4 demonstrates this exception to the guideline: The Main method for a console application is one of the few situations where code may block on an asynchronous method.

Figure 4 The Main Method May Call Task.Wait or Task.Result
class Program
{
  static void Main()
  {
    MainAsync().Wait();
  }
  static async Task MainAsync()
  {
    try
    {
      // Asynchronous implementation.
      await Task.Delay(1000);
    }
    catch (Exception ex)
    {
      // Handle exceptions.
    }
  }
}

Allowing async to grow through the codebase is the best solution, but this means there’s a lot of initial work for an application to see real benefit from async code. There are a few techniques for incrementally converting a large codebase to async code, but they’re outside the scope of this article. In some cases, using Task.Wait or Task.Result can help with a partial conversion, but you need to be aware of the deadlock problem as well as the error-handling problem. I’ll explain the error-handling problem now and show how to avoid the deadlock problem later in this article.

Every Task will store a list of exceptions. When you await a Task, the first exception is re-thrown, so you can catch the specific exception type (such as InvalidOperationException). However, when you synchronously block on a Task using Task.Wait or Task.Result, all of the exceptions are wrapped in an AggregateException and thrown. Refer again to Figure 4. The try/catch in MainAsync will catch a specific exception type, but if you put the try/catch in Main, then it will always catch an AggregateException. Error handling is much easier to deal with when you don’t have an AggregateException, so I put the “global” try/catch in MainAsync.
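
For example, here is a sketch of what the Main-level catch would have to look like if the try/catch from Figure 4 were moved out of MainAsync:

static void Main()
{
  try
  {
    MainAsync().Wait();
  }
  catch (AggregateException ex)
  {
    // The InvalidOperationException (or whatever was thrown) is wrapped;
    // it has to be dug out of ex.InnerExceptions.
  }
}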

So far, I’ve shown two problems with blocking on async code: possible deadlocks and more-complicated error handling. There’s also a problem with using blocking code within an async method. Consider this simple example:

public static class NotFullyAsynchronousDemo
{
  // This method synchronously blocks a thread.
  public static async Task TestNotFullyAsync()
  {
    await Task.Yield();
    Thread.Sleep(5000);
  }
}

This method isn’t fully asynchronous. It will immediately yield, returning an incomplete task, but when it resumes it will synchronously block whatever thread is running. If this method is called from a GUI context, it will block the GUI thread; if it’s called from an ASP.NET request context, it will block the current ASP.NET request thread. Asynchronous code works best if it doesn’t synchronously block. Figure 5 is a cheat sheet of async replacements for synchronous operations.

Figure 5 The “Async Way” of Doing Things

To Do This … | Instead of This … | Use This
Retrieve the result of a background task | Task.Wait or Task.Result | await
Wait for any task to complete | Task.WaitAny | await Task.WhenAny
Retrieve the results of multiple tasks | Task.WaitAll | await Task.WhenAll
Wait a period of time | Thread.Sleep | await Task.Delay

To summarize this second guideline, you should avoid mixing async and blocking code. Mixed async and blocking code can cause deadlocks, more-complex error handling and unexpected blocking of context threads. The exception to this guideline is the Main method for console applications, or—if you’re an advanced user—managing a partially asynchronous codebase.

Configure Context

Earlier in this article, I briefly explained how the “context” is captured by default when an incomplete Task is awaited, and that this captured context is used to resume the async method. The example in Figure 3 shows how resuming on the context clashes with synchronous blocking to cause a deadlock. This context behavior can also cause another problem—one of performance. As asynchronous GUI applications grow larger, you might find many small parts of async methods all using the GUI thread as their context. This can cause sluggishness as responsiveness suffers from “thousands of paper cuts.”

To mitigate this, await the result of ConfigureAwait whenever you can. The following code snippet illustrates the default context behavior and the use of ConfigureAwait:

async Task MyMethodAsync()
{
  // Code here runs in the original context.
  await Task.Delay(1000);
  // Code here runs in the original context.
  await Task.Delay(1000).ConfigureAwait(
    continueOnCapturedContext: false);
  // Code here runs without the original
  // context (in this case, on the thread pool).
}

By using ConfigureAwait, you enable a small amount of parallelism: Some asynchronous code can run in parallel with the GUI thread instead of constantly badgering it with bits of work to do.

Aside from performance, ConfigureAwait has another important aspect: It can avoid deadlocks. Consider Figure 3 again; if you add “ConfigureAwait(false)” to the line of code in DelayAsync, then the deadlock is avoided. This time, when the await completes, it attempts to execute the remainder of the async method within the thread pool context. The method is able to complete, which completes its returned task, and there’s no deadlock. This technique is particularly useful if you need to gradually convert an application from synchronous to asynchronous.

If you can use ConfigureAwait at some point within a method, then I recommend you use it for every await in that method after that point. Recall that the context is captured only if an incomplete Task is awaited; if the Task is already complete, then the context isn’t captured. Some tasks might complete faster than expected in different hardware and network situations, and you need to graciously handle a returned task that completes before it’s awaited. Figure 6 shows a modified example.

Figure 6 Handling a Returned Task that Completes Before It’s Awaited
async Task MyMethodAsync()
{
  // Code here runs in the original context.
  await Task.FromResult(1);
  // Code here runs in the original context.
  await Task.FromResult(1).ConfigureAwait(continueOnCapturedContext: false);
  // Code here runs in the original context.
  var random = new Random();
  int delay = random.Next(2); // Delay is either 0 or 1
  await Task.Delay(delay).ConfigureAwait(continueOnCapturedContext: false);
  // Code here might or might not run in the original context.
  // The same is true when you await any Task
  // that might complete very quickly.
}

You should not use ConfigureAwait when you have code after the await in the method that needs the context. For GUI apps, this includes any code that manipulates GUI elements, writes data-bound properties or depends on a GUI-specific type such as Dispatcher/CoreDispatcher. For ASP.NET apps, this includes any code that uses HttpContext.Current or builds an ASP.NET response, including return statements in controller actions. Figure 7 demonstrates one common pattern in GUI apps—having an async event handler disable its control at the beginning of the method, perform some awaits and then re-enable its control at the end of the handler; the event handler can’t give up its context because it needs to re-enable its control.

Figure 7 Having an Async Event Handler Disable and Re-Enable Its Control
private async void button1_Click(object sender, EventArgs e)
{
  button1.Enabled = false;
  try
  {
    // Can't use ConfigureAwait here ...
    await Task.Delay(1000);
  }
  finally
  {
    // Because we need the context here.
    button1.Enabled = true;
  }
}

Each async method has its own context, so if one async method calls another async method, their contexts are independent. Figure 8 shows a minor modification of Figure 7.

Figure 8 Each Async Method Has Its Own Context
private async Task HandleClickAsync()
{
  // Can use ConfigureAwait here.
  await Task.Delay(1000).ConfigureAwait(continueOnCapturedContext: false);
}
private async void button1_Click(object sender, EventArgs e)
{
  button1.Enabled = false;
  try
  {
    // Can't use ConfigureAwait here.
    await HandleClickAsync();
  }
  finally
  {
    // We are back on the original context for this method.
    button1.Enabled = true;
  }
}

Context-free code is more reusable. Try to create a barrier in your code between the context-sensitive code and context-free code, and minimize the context-sensitive code. In Figure 8, I recommend putting all the core logic of the event handler within a testable and context-free async Task method, leaving only the minimal code in the context-sensitive event handler. Even if you’re writing an ASP.NET application, if you have a core library that’s potentially shared with desktop applications, consider using ConfigureAwait in the library code.

To summarize this third guideline, you should use ConfigureAwait when possible. Context-free code has better performance for GUI applications and is a useful technique for avoiding deadlocks when working with a partially async codebase. The exceptions to this guideline are methods that require the context.

Know Your Tools

There’s a lot to learn about async and await, and it’s natural to get a little disoriented. Figure 9 is a quick reference of solutions to common problems.

Figure 9 Solutions to Common Async Problems

Problem | Solution
Create a task to execute code | Task.Run or TaskFactory.StartNew (not the Task constructor or Task.Start)
Create a task wrapper for an operation or event | TaskFactory.FromAsync or TaskCompletionSource<T>
Support cancellation | CancellationTokenSource and CancellationToken
Report progress | IProgress<T> and Progress<T>
Handle streams of data | TPL Dataflow or Reactive Extensions
Synchronize access to a shared resource | SemaphoreSlim
Asynchronously initialize a resource | AsyncLazy<T>
Async-ready producer/consumer structures | TPL Dataflow or AsyncCollection<T>

The first problem is task creation. Obviously, an async method can create a task, and that’s the easiest option. If you need to run code on the thread pool, use Task.Run. If you want to create a task wrapper for an existing asynchronous operation or event, use TaskCompletionSource<T>. The next common problem is how to handle cancellation and progress reporting. The base class library (BCL) includes types specifically intended to solve these issues: CancellationTokenSource/CancellationToken and IProgress<T>/Progress<T>. Asynchronous code should use the Task-based Asynchronous Pattern, or TAP (msdn.microsoft.com/library/hh873175), which explains task creation, cancellation and progress reporting in detail.
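
As a hedged sketch of the TaskCompletionSource<T> case, here is the classic pattern of wrapping an event-based (EAP) operation in a Task. WebClient and its DownloadStringCompleted event are existing BCL members; the wrapper method itself is illustrative:

// Requires: using System; using System.Net; using System.Threading.Tasks;
public static Task<string> DownloadStringTaskAsync(WebClient client, Uri address)
{
  var tcs = new TaskCompletionSource<string>();
  client.DownloadStringCompleted += (sender, e) =>
  {
    if (e.Error != null) tcs.TrySetException(e.Error);
    else if (e.Cancelled) tcs.TrySetCanceled();
    else tcs.TrySetResult(e.Result);
  };
  client.DownloadStringAsync(address);
  return tcs.Task;
}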

Another problem that comes up is how to handle streams of asynchronous data. Tasks are great, but they can only return one object and only complete once. For asynchronous streams, you can use either TPL Dataflow or Reactive Extensions (Rx). TPL Dataflow creates a “mesh” that has an actor-like feel to it. Rx is more powerful and efficient but has a more difficult learning curve. Both TPL Dataflow and Rx have async-ready methods and work well with asynchronous code.

Just because your code is asynchronous doesn’t mean that it’s safe. Shared resources still need to be protected, and this is complicated by the fact that you can’t await from inside a lock. Here’s an example of async code that can corrupt shared state if it executes twice, even if it always runs on the same thread:

int value;

Task<int> GetNextValueAsync(int current);

async Task UpdateValueAsync()
{
  value = await GetNextValueAsync(value);
}

The problem is that the method reads the value and suspends itself at the await, and when the method resumes it assumes the value hasn’t changed. To solve this problem, the SemaphoreSlim class was augmented with the async-ready WaitAsync overloads. Figure 10 demonstrates SemaphoreSlim.WaitAsync.

Figure 10 SemaphoreSlim Permits Asynchronous Synchronization
SemaphoreSlim mutex = new SemaphoreSlim(1);

int value;

Task<int> GetNextValueAsync(int current);

async Task UpdateValueAsync()
{
  await mutex.WaitAsync().ConfigureAwait(false);

  try
  {
    value = await GetNextValueAsync(value);
  }
  finally
  {
    mutex.Release();
  }
}

Asynchronous code is often used to initialize a resource that’s then cached and shared. There isn’t a built-in type for this, but Stephen Toub developed an AsyncLazy<T> that acts like a merge of Task<T> and Lazy<T>. The original type is described on his blog (bit.ly/dEN178), and an updated version is available in my AsyncEx library (nitoasyncex.codeplex.com).

Finally, some async-ready data structures are sometimes needed. TPL Dataflow provides a BufferBlock<T> that acts like an async-ready producer/consumer queue. Alternatively, AsyncEx provides AsyncCollection<T>, which is an async version of BlockingCollection<T>.
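
For example, here is a minimal sketch of an async-ready producer/consumer pair built on BufferBlock<T> (this assumes the System.Threading.Tasks.Dataflow package is referenced):

static async Task ProduceAsync(BufferBlock<int> queue)
{
  for (int i = 0; i < 10; i++)
    await queue.SendAsync(i);  // asynchronously hand items to the block
  queue.Complete();            // signal that no more items are coming
}

static async Task ConsumeAsync(BufferBlock<int> queue)
{
  // OutputAvailableAsync returns false once the block is completed and drained.
  while (await queue.OutputAvailableAsync())
    Console.WriteLine(await queue.ReceiveAsync());
}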

I hope the guidelines and pointers in this article have been helpful. Async is a truly awesome language feature, and now is a great time to start using it!


Resolving The WordPress Multisite Redirect Loop

This is a re-post from an original article by Tom McFarlin located here


Though I do the majority of my work using single site WordPress installs, there are a number of sites and projects in which I’ve used WordPress multisite and there’s a problem that I’ve experienced specifically with using WordPress multisite, subdomains, and shared hosting environments.

Specifically, the problem is this:

  • Install WordPress and activate multisite
  • Configure the installation to use subdomains (versus subdirectories)
  • Attempt to login and get stuck in a redirect loop

If you have a single instance of WordPress multisite installed on the same server, there’s no issue, but if you go beyond that then you normally hit a problem: a redirect loop.

The WordPress Multisite Redirect Loop

The WordPress Login Screen

The most frustrating screen ever (in a redirect loop, that is).

Once you’ve increased the number of your multisite installs beyond one, then you’re likely to be unable to login as you’ll get stuck in a redirect loop. That is, every time you try to login, you’re returned to the login screen.

Luckily, the fix is relatively easy.

In your wp-config.php file, add the following lines of code:

define('ADMIN_COOKIE_PATH', '/');
define('COOKIE_DOMAIN', '');
define('COOKIEPATH', '');
define('SITECOOKIEPATH', '');

And do so just before the line that reads:

/* That's all, stop editing! Happy blogging. */

Once done, the redirect issue should be resolved.

Why Does This Happen?

Whenever you’re running multiple versions of WordPress on the same server, you can visualize the setup like this:

WordPress Multisite Installation

Basically, each version of WordPress, regardless of its domain or subdomain, maps to a single IP address. In this case, 192.168.0.1.

When a request comes into the server, part of the request includes the domain. A domain is associated with an IP address. When a cookie is created, it includes the name, some sensitive content, and then the path.

For example:

NAME = wordpress_d676ec21cf050e966685794aa715694f
CONTENT = removed
PATH = /sitename/wp-admin

In a WordPress Multisite setup a cookie for two sites may look like this:

NAME = wordpress_d676ec21cf050e966685794aa715694f
PATH = /sitename/wp-admin

NAME = wordpress_d676ec21cf050e966685794aa715694f
PATH = /sitename/

Notice that the names of the two cookies above are exactly the same, but the paths are different. This is because two different sites with different domains are hosted on the same IP address, and they both exist in the cookie because the cookies aren’t being reset.

Cookies Being Set By WordPress

Cookies being set for the different sites on the same domain.

As such, when you attempt to login to a WordPress installation on a different domain (but on the same IP), the cookie is essentially invalid.

Thus, WordPress – in the most technical term possible – wigs out.

But more seriously, wp-login doesn’t attempt to look for cookies before actually setting them. This means that an invalid cookie is being used and since it doesn’t attempt to clear the existing cookies, you get stuck in the login loop.

Thus, the big picture looks something like this:

WordPress Multisite and Cookies

Sure, clearing the cookies will do the trick, but users shouldn’t have to do that. Additionally, not everyone will see this problem occur, but if you’re in the business of managing a multisite installation in a shared environment, then you’re likely to see it.

The code above will ensure that WordPress is clearing the cookie for the given domain of the multisite thus allowing the login process to set it correctly.


Ubuntu 15.04 – Configure your system to have x11vnc running at startup

This article was originally posted here.

Hello World,

If you are following us, you probably remember that we already wrote a post about this topic (see Ubuntu 14.10 – Configure your system to have x11vnc running at startup). Since Ubuntu 15.04 is using systemd, the instructions found in the previous post are not applicable anymore. Some of our readers had issues after upgrading to Ubuntu 15.04: x11vnc no longer runs at startup.

This post will provide the necessary information to have x11vnc running at startup on Ubuntu 15.04 when systemd is used.

 

Our Goal!

At the end of this post, you should be able to connect via VNC to your Ubuntu machine even after a reboot and even if no user is logged into the machine. This configuration should display the login screen in whichever VNC viewer client you are using.

We didn’t invent anything here. All the information provided here is based on the information made available at this location: https://help.ubuntu.com/community/VNC/Servers#Have_x11vnc_start_automatically_via_systemd_in_any_environment_.28Vivid.2B-.29

Installing x11vnc server

In this post, we have decided to use the x11vnc server package to provide VNC capabilities. The installation process is quite straightforward. Log into your Ubuntu 15.04 machine, open the terminal console and issue the following command:

sudo apt-get install x11vnc


To have a minimum of security, we will protect the VNC connection with a password. The password will be stored in a file. To create this file, you will need to issue the following command:

sudo x11vnc -storepasswd /etc/x11vnc.pass

You will be asked to enter a password. Enter the password, confirm your choice, and you should be good to go.


Create the Service Unit file

So far, we have just issued standard commands related to the x11vnc package. We need to create the service unit file for our x11vnc service. To do this, we will issue the following command:

sudo nano /lib/systemd/system/x11vnc.service

This file should contain the following lines:

[Unit]
Description=Start x11vnc at startup.
After=multi-user.target
[Service]
Type=simple
ExecStart=/usr/bin/x11vnc -auth guess -forever -loop -noxdamage -repeat -rfbauth /etc/x11vnc.pass -rfbport 5900 -shared
[Install]
WantedBy=multi-user.target

Save the file

Configure Systemd

It’s time to issue the commands to make systemd aware of the change and have the service run at startup. In a command prompt, issue the following commands:

sudo systemctl daemon-reload
sudo systemctl enable x11vnc.service

 

Restart the system and do not log in. We will check whether this is working…

Testing the solution!

To check that you can indeed perform a VNC connection to your Ubuntu machine, try to connect to it using your favourite VNC viewer (we are using TigerVNC) while nobody is logged in, just after a reboot of the machine.

In the VNC viewer, provide the IP address or hostname of the machine to connect to and the port to be used. In our example, the port used is 5900. If you have set a password to protect your VNC connection, you will be prompted for it as well.


If everything is OK, you should see the Ubuntu login page displayed inside your VNC viewer.


 

Final Notes

And voila! We have successfully updated the instructions on how to have x11vnc run at startup. As you can see, since Ubuntu 15.04 is using systemd, we need to create our service unit file (x11vnc.service), register it with systemctl, and we are done.

Pff… over the last few days I have updated some of the most popular posts about xrdp, x11vnc and Ubuntu 15.04… It’s time for me to take a break…

Till next time