Pretty good backup script for linux folders

This was originally taken from here with some modifications by me

Automating backups with tar

Automating backups is always worthwhile, and it is a good example of how a little scripting can put your Linux server to work for you. The example below is our backup script, called backup.cron. It is designed to run on any computer by changing only the five variables:

  1. COMPUTER
  2. DIRECTORIES
  3. BACKUPDIR
  4. TIMEDIR
  5. BACKUPSET

We suggest that you set this script up and run it from the beginning of a month, then let it run for a full month before making major changes. In the example below we back up to a directory on the local server (BACKUPDIR), but you could modify this script to write to a tape drive on the local server or to an NFS-mounted file system.
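For example, here is a minimal sketch of the NFS variant; the export name nfs-server:/export/backups and the mount point are assumptions you would adjust to your own environment:

    # Mount the NFS export once (or add it to /etc/fstab)...
    sudo mkdir -p /mnt/nfs-backups
    sudo mount -t nfs nfs-server:/export/backups /mnt/nfs-backups
    # ...then point the script's variables at it:
    # BACKUPDIR=/mnt/nfs-backups
    # TIMEDIR=/mnt/nfs-backups/last-full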

  1. Create the backup script file with touch /etc/cron.daily/backup.cron and add the following lines to it:
    #!/bin/sh
    # full and incremental backup script
    # created 07 February 2000
    # Based on a script by Daniel O'Callaghan <danny@freebsd.org>
    # and modified by Gerhard Mourani <gmourani@videotron.ca>
    # and modified by Shawn Anderson <sanderson@eye-catcher.com> on 2016-08-14
    #Change the 5 variables below to fit your computer/backup
    
    COMPUTER=$(hostname) # name of this computer
    BACKUPSET=HOMEDIR # name of the backup set
    DIRECTORIES="/home" # directories to backup
    BACKUPDIR=/backups # where to store the backups
    TIMEDIR=/backups/last-full # where to store time of full backup
    TAR=/bin/tar # name and location of tar
    
    #You should not have to change anything below here
    PATH=/usr/local/bin:/usr/bin:/bin
    DOW=`date +%a` # Day of the week e.g. Mon
    DOM=`date +%d` # Date of the Month e.g. 27
    DM=`date +%d%b` # Date and Month e.g. 27Sep
    
    #Set various things up
    
    # Is PV installed?
    type pv >/dev/null 2>&1 || sudo apt-get install -y pv
    
    # Do the required paths exist
    if [ ! -d $BACKUPDIR ]; then
       mkdir $BACKUPDIR
    fi
    
    if [ ! -d $TIMEDIR ]; then
       mkdir $TIMEDIR
    fi
    
    # On the 1st of the month a permanent full backup is made
    # Every Sunday a full backup is made - overwriting last Sundays backup
    # The rest of the time an incremental backup is made. Each incremental
    # backup overwrites last weeks incremental backup of the same name.
    #
    # if NEWER = "", then tar backs up all files in the directories
    # otherwise it backs up files newer than the NEWER date. NEWER
    # gets it date from the file written every Sunday.
    
    # Monthly full backup
    if [ $DOM = "01" ]; then
       NEWER=""
       $TAR $NEWER -cf - $DIRECTORIES | pv -s $(du -sb $DIRECTORIES | awk '{print $1}') | gzip > $BACKUPDIR/$BACKUPSET-$COMPUTER-$DM.tgz
    fi
    
    # Weekly full backup
    if [ $DOW = "Sun" ]; then
       NEWER=""
       NOW=`date +%d-%b`
    
       # Update full backup date
       echo $NOW > $TIMEDIR/$COMPUTER-full-date
       $TAR $NEWER -cf - $DIRECTORIES | pv -s $(du -sb $DIRECTORIES | awk '{print $1}') | gzip > $BACKUPDIR/$BACKUPSET-$COMPUTER-$DOW.tgz
    
    # Make incremental backup - overwrite last weeks
    else
       # Get date of last full backup
       NEWER="--newer `cat $TIMEDIR/$COMPUTER-full-date`"
       $TAR $NEWER -cf - $DIRECTORIES | pv -s $(du -sb $DIRECTORIES | awk '{print $1}') | gzip > $BACKUPDIR/$BACKUPSET-$COMPUTER-$DOW.tgz
    fi
    
    # Remove backup files older than 90 days (this really shouldn't be necessary unless something
    # isn't right with the auto-rotation; I keep it in just for good measure)
    find $BACKUPDIR/$BACKUPSET-$COMPUTER* -mtime +90 -exec rm {} \;
    Example 33-1. Backup directory after a week
    Here is an abbreviated look at the backup directory after one week:

    total 22217
    -rw-r--r-- 1 root root 10731288 Feb 7 11:24 deep-HOMEDIR-01Feb.tar
    -rw-r--r-- 1 root root 6879 Feb 7 11:24 deep-HOMEDIR-Fri.tar
    -rw-r--r-- 1 root root 2831 Feb 7 11:24 deep-HOMEDIR-Mon.tar
    -rw-r--r-- 1 root root 7924 Feb 7 11:25 deep-HOMEDIR-Sat.tar
    -rw-r--r-- 1 root root 11923013 Feb 7 11:24 deep-HOMEDIR-Sun.tar
    -rw-r--r-- 1 root root 5643 Feb 7 11:25 deep-HOMEDIR-Thu.tar
    -rw-r--r-- 1 root root 3152 Feb 7 11:25 deep-HOMEDIR-Tue.tar
    -rw-r--r-- 1 root root 4567 Feb 7 11:25 deep-HOMEDIR-Wed.tar
    drwxr-xr-x 2 root root 1024 Feb 7 11:20 last-full
    

    Important: The backup directory BACKUPDIR and the directory where the time of the full backup is stored, TIMEDIR, must exist (or be created) before the backup script is first run, or you will receive an error message.

  2. If you are not starting this backup script at the beginning of the month (01-month-year), the incremental backups need the date of the last full (Sunday) backup in order to work properly. If you start in the middle of the week, you will need to create the time file in the TIMEDIR directory with the following command:
    [root@deep] /# date +%d-%b > /backups/last-full/myserver-full-date

    Where /backups/last-full is our TIMEDIR variable, in which we store the time of the full backup; myserver in myserver-full-date is the name of our server (e.g. deep); and the time file consists of a single line holding the present date (e.g. 15-Feb).

  3. Make this script executable and change its permissions to 755 so that it is writable only by the super-user root:
    [root@deep] /# chmod 755 /etc/cron.daily/backup.cron

Because this script is in the /etc/cron.daily directory, it will be automatically run as a cron job at one o’clock in the morning every day.
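If you want to sanity-check that cron will actually pick the script up, run-parts can list what it would execute (this check is not part of the original article). Note that on Debian and Ubuntu, run-parts skips file names containing dots, so on those systems you may need to install the script as /etc/cron.daily/backup rather than backup.cron:

    # List the scripts cron would run from this directory, without executing them
    run-parts --test /etc/cron.daily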

Async/Await – Best Practices in Asynchronous Programming

By Stephen Cleary | March 2013 (repost from here)

These days there’s a wealth of information about the new async and await support in the Microsoft .NET Framework 4.5. This article is intended as a “second step” in learning asynchronous programming; I assume that you’ve read at least one introductory article about it. This article presents nothing new, as the same advice can be found online in sources such as Stack Overflow, MSDN forums and the async/await FAQ. This article just highlights a few best practices that can get lost in the avalanche of available documentation.

The best practices in this article are more what you’d call “guidelines” than actual rules. There are exceptions to each of these guidelines. I’ll explain the reasoning behind each guideline so that it’s clear when it does and does not apply. The guidelines are summarized in Figure 1; I’ll discuss each in the following sections.

Figure 1 Summary of Asynchronous Programming Guidelines

Name              | Description                                        | Exceptions
Avoid async void  | Prefer async Task methods over async void methods  | Event handlers
Async all the way | Don’t mix blocking and async code                  | Console main method
Configure context | Use ConfigureAwait(false) when you can             | Methods that require context

Avoid Async Void

There are three possible return types for async methods: Task, Task<T> and void, but the natural return types for async methods are just Task and Task<T>. When converting from synchronous to asynchronous code, any method returning a type T becomes an async method returning Task<T>, and any method returning void becomes an async method returning Task. The following code snippet illustrates a synchronous void-returning method and its asynchronous equivalent:

void MyMethod()
{
  // Do synchronous work.
  Thread.Sleep(1000);
}
async Task MyMethodAsync()
{
  // Do asynchronous work.
  await Task.Delay(1000);
}

Void-returning async methods have a specific purpose: to make asynchronous event handlers possible. It is possible to have an event handler that returns some actual type, but that doesn’t work well with the language; invoking an event handler that returns a type is very awkward, and the notion of an event handler actually returning something doesn’t make much sense. Event handlers naturally return void, so async methods return void so that you can have an asynchronous event handler. However, some semantics of an async void method are subtly different than the semantics of an async Task or async Task<T> method.

Async void methods have different error-handling semantics. When an exception is thrown out of an async Task or async Task<T> method, that exception is captured and placed on the Task object. With async void methods, there is no Task object, so any exceptions thrown out of an async void method will be raised directly on the SynchronizationContext that was active when the async void method started. Figure 2 illustrates that exceptions thrown from async void methods can’t be caught naturally.

Figure 2 Exceptions from an Async Void Method Can’t Be Caught with Catch
private async void ThrowExceptionAsync()
{
  throw new InvalidOperationException();
}
public void AsyncVoidExceptions_CannotBeCaughtByCatch()
{
  try
  {
    ThrowExceptionAsync();
  }
  catch (Exception)
  {
    // The exception is never caught here!
    throw;
  }
}

These exceptions can be observed using AppDomain.UnhandledException or a similar catch-all event for GUI/ASP.NET applications, but using those events for regular exception handling is a recipe for unmaintainability.

Async void methods have different composing semantics. Async methods returning Task or Task<T> can be easily composed using await, Task.WhenAny, Task.WhenAll and so on. Async methods returning void don’t provide an easy way to notify the calling code that they’ve completed. It’s easy to start several async void methods, but it’s not easy to determine when they’ve finished. Async void methods will notify their SynchronizationContext when they start and finish, but a custom SynchronizationContext is a complex solution for regular application code.
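For instance, here is a minimal sketch of that kind of composition, where DoFirstThingAsync and DoSecondThingAsync are hypothetical async Task methods; nothing equivalent is possible if they return void:

async Task DoBothAsync()
{
  Task first = DoFirstThingAsync();
  Task second = DoSecondThingAsync();
  // Both completions (and any exceptions) are observable at this single point.
  await Task.WhenAll(first, second);
}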

Async void methods are difficult to test. Because of the differences in error handling and composing, it’s difficult to write unit tests that call async void methods. The MSTest asynchronous testing support only works for async methods returning Task or Task<T>. It’s possible to install a SynchronizationContext that detects when all async void methods have completed and collects any exceptions, but it’s much easier to just make the async void methods return Task instead.

It’s clear that async void methods have several disadvantages compared to async Task methods, but they’re quite useful in one particular case: asynchronous event handlers. The differences in semantics make sense for asynchronous event handlers. They raise their exceptions directly on the SynchronizationContext, which is similar to how synchronous event handlers behave. Synchronous event handlers are usually private, so they can’t be composed or directly tested. An approach I like to take is to minimize the code in my asynchronous event handler—for example, have it await an async Task method that contains the actual logic. The following code illustrates this approach, using async void methods for event handlers without sacrificing testability:

private async void button1_Click(object sender, EventArgs e)
{
  await Button1ClickAsync();
}
public async Task Button1ClickAsync()
{
  // Do asynchronous work.
  await Task.Delay(1000);
}

Async void methods can wreak havoc if the caller isn’t expecting them to be async. When the return type is Task, the caller knows it’s dealing with a future operation; when the return type is void, the caller might assume the method is complete by the time it returns. This problem can crop up in many unexpected ways. It’s usually wrong to provide an async implementation (or override) of a void-returning method on an interface (or base class). Some events also assume that their handlers are complete when they return. One subtle trap is passing an async lambda to a method taking an Action parameter; in this case, the async lambda returns void and inherits all the problems of async void methods. As a general rule, async lambdas should only be used if they’re converted to a delegate type that returns Task (for example, Func<Task>).
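Here is a small sketch of that last trap; Run and RunAsync are hypothetical helpers, not framework methods. The async lambda passed as an Action compiles to an async void method, so its exception bypasses the helper's catch block, while the Func<Task> version can be awaited and observed:

void Run(Action action)
{
  try { action(); }
  catch { /* Never reached for an async lambda: it compiles to async void. */ }
}
async Task RunAsync(Func<Task> action)
{
  try { await action(); }
  catch { /* An exception thrown by the async lambda is observed here. */ }
}
async Task CallerAsync()
{
  // Converts to Action: the exception is raised on the SynchronizationContext.
  Run(async () => { await Task.Delay(10); throw new InvalidOperationException(); });
  // Converts to Func<Task>: the exception is caught inside RunAsync.
  await RunAsync(async () => { await Task.Delay(10); throw new InvalidOperationException(); });
}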

To summarize this first guideline, you should prefer async Task to async void. Async Task methods enable easier error-handling, composability and testability. The exception to this guideline is asynchronous event handlers, which must return void. This exception includes methods that are logically event handlers even if they’re not literally event handlers (for example, ICommand.Execute implementations).

Async All the Way

Asynchronous code reminds me of the story of a fellow who mentioned that the world was suspended in space and was immediately challenged by an elderly lady claiming that the world rested on the back of a giant turtle. When the man enquired what the turtle was standing on, the lady replied, “You’re very clever, young man, but it’s turtles all the way down!” As you convert synchronous code to asynchronous code, you’ll find that it works best if asynchronous code calls and is called by other asynchronous code—all the way down (or “up,” if you prefer). Others have also noticed the spreading behavior of asynchronous programming and have called it “contagious” or compared it to a zombie virus. Whether turtles or zombies, it’s definitely true that asynchronous code tends to drive surrounding code to also be asynchronous. This behavior is inherent in all types of asynchronous programming, not just the new async/await keywords.

“Async all the way” means that you shouldn’t mix synchronous and asynchronous code without carefully considering the consequences. In particular, it’s usually a bad idea to block on async code by calling Task.Wait or Task.Result. This is an especially common problem for programmers who are “dipping their toes” into asynchronous programming, converting just a small part of their application and wrapping it in a synchronous API so the rest of the application is isolated from the changes. Unfortunately, they run into problems with deadlocks. After answering many async-related questions on the MSDN forums, Stack Overflow and e-mail, I can say this is by far the most-asked question by async newcomers once they learn the basics: “Why does my partially async code deadlock?”

Figure 3 shows a simple example where one method blocks on the result of an async method. This code will work just fine in a console application but will deadlock when called from a GUI or ASP.NET context. This behavior can be confusing, especially considering that stepping through the debugger implies that it’s the await that never completes. The actual cause of the deadlock is further up the call stack when Task.Wait is called.

Figure 3 A Common Deadlock Problem When Blocking on Async Code
public static class DeadlockDemo
{
  private static async Task DelayAsync()
  {
    await Task.Delay(1000);
  }
  // This method causes a deadlock when called in a GUI or ASP.NET context.
  public static void Test()
  {
    // Start the delay.
    var delayTask = DelayAsync();
    // Wait for the delay to complete.
    delayTask.Wait();
  }
}

The root cause of this deadlock is due to the way await handles contexts. By default, when an incomplete Task is awaited, the current “context” is captured and used to resume the method when the Task completes. This “context” is the current SynchronizationContext unless it’s null, in which case it’s the current TaskScheduler. GUI and ASP.NET applications have a SynchronizationContext that permits only one chunk of code to run at a time. When the await completes, it attempts to execute the remainder of the async method within the captured context. But that context already has a thread in it, which is (synchronously) waiting for the async method to complete. They’re each waiting for the other, causing a deadlock.

Note that console applications don’t cause this deadlock. They have a thread pool SynchronizationContext instead of a one-chunk-at-a-time SynchronizationContext, so when the await completes, it schedules the remainder of the async method on a thread pool thread. The method is able to complete, which completes its returned task, and there’s no deadlock. This difference in behavior can be confusing when programmers write a test console program, observe the partially async code work as expected, and then move the same code into a GUI or ASP.NET application, where it deadlocks.

The best solution to this problem is to allow async code to grow naturally through the codebase. If you follow this solution, you’ll see async code expand to its entry point, usually an event handler or controller action. Console applications can’t follow this solution fully because the Main method can’t be async. If the Main method were async, it could return before it completed, causing the program to end. Figure 4 demonstrates this exception to the guideline: The Main method for a console application is one of the few situations where code may block on an asynchronous method.

Figure 4 The Main Method May Call Task.Wait or Task.Result
class Program
{
  static void Main()
  {
    MainAsync().Wait();
  }
  static async Task MainAsync()
  {
    try
    {
      // Asynchronous implementation.
      await Task.Delay(1000);
    }
    catch (Exception ex)
    {
      // Handle exceptions.
    }
  }
}

Allowing async to grow through the codebase is the best solution, but this means there’s a lot of initial work for an application to see real benefit from async code. There are a few techniques for incrementally converting a large codebase to async code, but they’re outside the scope of this article. In some cases, using Task.Wait or Task.Result can help with a partial conversion, but you need to be aware of the deadlock problem as well as the error-handling problem. I’ll explain the error-handling problem now and show how to avoid the deadlock problem later in this article.

Every Task will store a list of exceptions. When you await a Task, the first exception is re-thrown, so you can catch the specific exception type (such as InvalidOperationException). However, when you synchronously block on a Task using Task.Wait or Task.Result, all of the exceptions are wrapped in an AggregateException and thrown. Refer again to Figure 4. The try/catch in MainAsync will catch a specific exception type, but if you put the try/catch in Main, then it will always catch an AggregateException. Error handling is much easier to deal with when you don’t have an AggregateException, so I put the “global” try/catch in MainAsync.
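As a quick sketch of the difference, assuming a hypothetical ThrowInvalidOperationAsync method that returns a faulted Task:

async Task ObserveExceptionsAsync()
{
  Task faulted = ThrowInvalidOperationAsync();
  // Blocking: the exception arrives wrapped in an AggregateException.
  try { faulted.Wait(); }
  catch (AggregateException) { }
  // Awaiting: the original exception is re-thrown directly.
  try { await faulted; }
  catch (InvalidOperationException) { }
}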

So far, I’ve shown two problems with blocking on async code: possible deadlocks and more-complicated error handling. There’s also a problem with using blocking code within an async method. Consider this simple example:

public static class NotFullyAsynchronousDemo
{
  // This method synchronously blocks a thread.
  public static async Task TestNotFullyAsync()
  {
    await Task.Yield();
    Thread.Sleep(5000);
  }
}

This method isn’t fully asynchronous. It will immediately yield, returning an incomplete task, but when it resumes it will synchronously block whatever thread is running. If this method is called from a GUI context, it will block the GUI thread; if it’s called from an ASP.NET request context, it will block the current ASP.NET request thread. Asynchronous code works best if it doesn’t synchronously block. Figure 5 is a cheat sheet of async replacements for synchronous operations.

Figure 5 The “Async Way” of Doing Things

To Do This …                             | Instead of This …         | Use This
Retrieve the result of a background task | Task.Wait or Task.Result  | await
Wait for any task to complete            | Task.WaitAny              | await Task.WhenAny
Retrieve the results of multiple tasks   | Task.WaitAll              | await Task.WhenAll
Wait a period of time                    | Thread.Sleep              | await Task.Delay

To summarize this second guideline, you should avoid mixing async and blocking code. Mixed async and blocking code can cause deadlocks, more-complex error handling and unexpected blocking of context threads. The exception to this guideline is the Main method for console applications, or—if you’re an advanced user—managing a partially asynchronous codebase.

Configure Context

Earlier in this article, I briefly explained how the “context” is captured by default when an incomplete Task is awaited, and that this captured context is used to resume the async method. The example in Figure 3 shows how resuming on the context clashes with synchronous blocking to cause a deadlock. This context behavior can also cause another problem—one of performance. As asynchronous GUI applications grow larger, you might find many small parts of async methods all using the GUI thread as their context. This can cause sluggishness as responsiveness suffers from “thousands of paper cuts.”

To mitigate this, await the result of ConfigureAwait whenever you can. The following code snippet illustrates the default context behavior and the use of ConfigureAwait:

async Task MyMethodAsync()
{
  // Code here runs in the original context.
  await Task.Delay(1000);
  // Code here runs in the original context.
  await Task.Delay(1000).ConfigureAwait(
    continueOnCapturedContext: false);
  // Code here runs without the original
  // context (in this case, on the thread pool).
}

By using ConfigureAwait, you enable a small amount of parallelism: Some asynchronous code can run in parallel with the GUI thread instead of constantly badgering it with bits of work to do.

Aside from performance, ConfigureAwait has another important aspect: It can avoid deadlocks. Consider Figure 3 again; if you add “ConfigureAwait(false)” to the line of code in DelayAsync, then the deadlock is avoided. This time, when the await completes, it attempts to execute the remainder of the async method within the thread pool context. The method is able to complete, which completes its returned task, and there’s no deadlock. This technique is particularly useful if you need to gradually convert an application from synchronous to asynchronous.
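For reference, here is a sketch of Figure 3’s DelayAsync with that one change applied:

private static async Task DelayAsync()
{
  // The continuation no longer needs the captured GUI/ASP.NET context,
  // so Test() can finish its blocking Wait() without deadlocking.
  await Task.Delay(1000).ConfigureAwait(false);
}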

If you can use ConfigureAwait at some point within a method, then I recommend you use it for every await in that method after that point. Recall that the context is captured only if an incomplete Task is awaited; if the Task is already complete, then the context isn’t captured. Some tasks might complete faster than expected in different hardware and network situations, and you need to graciously handle a returned task that completes before it’s awaited. Figure 6 shows a modified example.

Figure 6 Handling a Returned Task that Completes Before It’s Awaited
async Task MyMethodAsync()
{
  // Code here runs in the original context.
  await Task.FromResult(1);
  // Code here runs in the original context.
  await Task.FromResult(1).ConfigureAwait(continueOnCapturedContext: false);
  // Code here runs in the original context.
  var random = new Random();
  int delay = random.Next(2); // Delay is either 0 or 1
  await Task.Delay(delay).ConfigureAwait(continueOnCapturedContext: false);
  // Code here might or might not run in the original context.
  // The same is true when you await any Task
  // that might complete very quickly.
}

You should not use ConfigureAwait when you have code after the await in the method that needs the context. For GUI apps, this includes any code that manipulates GUI elements, writes data-bound properties or depends on a GUI-specific type such as Dispatcher/CoreDispatcher. For ASP.NET apps, this includes any code that uses HttpContext.Current or builds an ASP.NET response, including return statements in controller actions. Figure 7 demonstrates one common pattern in GUI apps—having an async event handler disable its control at the beginning of the method, perform some awaits and then re-enable its control at the end of the handler; the event handler can’t give up its context because it needs to re-enable its control.

Figure 7 Having an Async Event Handler Disable and Re-Enable Its Control
private async void button1_Click(object sender, EventArgs e)
{
  button1.Enabled = false;
  try
  {
    // Can't use ConfigureAwait here ...
    await Task.Delay(1000);
  }
  finally
  {
    // Because we need the context here.
    button1.Enabled = true;
  }
}

Each async method has its own context, so if one async method calls another async method, their contexts are independent. Figure 8 shows a minor modification of Figure 7.

Figure 8 Each Async Method Has Its Own Context
private async Task HandleClickAsync()
{
  // Can use ConfigureAwait here.
  await Task.Delay(1000).ConfigureAwait(continueOnCapturedContext: false);
}
private async void button1_Click(object sender, EventArgs e)
{
  button1.Enabled = false;
  try
  {
    // Can't use ConfigureAwait here.
    await HandleClickAsync();
  }
  finally
  {
    // We are back on the original context for this method.
    button1.Enabled = true;
  }
}

Context-free code is more reusable. Try to create a barrier in your code between the context-sensitive code and context-free code, and minimize the context-sensitive code. In Figure 8, I recommend putting all the core logic of the event handler within a testable and context-free async Task method, leaving only the minimal code in the context-sensitive event handler. Even if you’re writing an ASP.NET application, if you have a core library that’s potentially shared with desktop applications, consider using ConfigureAwait in the library code.

To summarize this third guideline, you should use ConfigureAwait when possible. Context-free code has better performance for GUI applications and is a useful technique for avoiding deadlocks when working with a partially async codebase. The exceptions to this guideline are methods that require the context.

Know Your Tools

There’s a lot to learn about async and await, and it’s natural to get a little disoriented. Figure 9 is a quick reference of solutions to common problems.

Figure 9 Solutions to Common Async Problems

Problem                                         | Solution
Create a task to execute code                   | Task.Run or TaskFactory.StartNew (not the Task constructor or Task.Start)
Create a task wrapper for an operation or event | TaskFactory.FromAsync or TaskCompletionSource<T>
Support cancellation                            | CancellationTokenSource and CancellationToken
Report progress                                 | IProgress<T> and Progress<T>
Handle streams of data                          | TPL Dataflow or Reactive Extensions
Synchronize access to a shared resource         | SemaphoreSlim
Asynchronously initialize a resource            | AsyncLazy<T>
Async-ready producer/consumer structures        | TPL Dataflow or AsyncCollection<T>

The first problem is task creation. Obviously, an async method can create a task, and that’s the easiest option. If you need to run code on the thread pool, use Task.Run. If you want to create a task wrapper for an existing asynchronous operation or event, use TaskCompletionSource<T>. The next common problem is how to handle cancellation and progress reporting. The base class library (BCL) includes types specifically intended to solve these issues: CancellationTokenSource/CancellationToken and IProgress<T>/Progress<T>. Asynchronous code should use the Task-based Asynchronous Pattern, or TAP (msdn.microsoft.com/library/hh873175), which explains task creation, cancellation and progress reporting in detail.
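As a rough sketch of the TaskCompletionSource<T> approach, where the Downloader type, its StartDownload method and its DownloadCompleted event are hypothetical stand-ins for an existing event-based API:

Task<string> DownloadAsync(Downloader downloader, string url)
{
  var tcs = new TaskCompletionSource<string>();
  downloader.DownloadCompleted += (sender, e) =>
  {
    if (e.Error != null) tcs.TrySetException(e.Error);
    else if (e.Cancelled) tcs.TrySetCanceled();
    else tcs.TrySetResult(e.Result);
  };
  downloader.StartDownload(url);  // Kick off the event-based operation.
  return tcs.Task;                // Callers can now simply await the result.
}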

Another problem that comes up is how to handle streams of asynchronous data. Tasks are great, but they can only return one object and only complete once. For asynchronous streams, you can use either TPL Dataflow or Reactive Extensions (Rx). TPL Dataflow creates a “mesh” that has an actor-like feel to it. Rx is more powerful and efficient but has a more difficult learning curve. Both TPL Dataflow and Rx have async-ready methods and work well with asynchronous code.

Just because your code is asynchronous doesn’t mean that it’s safe. Shared resources still need to be protected, and this is complicated by the fact that you can’t await from inside a lock. Here’s an example of async code that can corrupt shared state if it executes twice, even if it always runs on the same thread:

int value;
Task<int> GetNextValueAsync(int current);
async Task UpdateValueAsync()
{
  value = await GetNextValueAsync(value);
}

The problem is that the method reads the value and suspends itself at the await, and when the method resumes it assumes the value hasn’t changed. To solve this problem, the SemaphoreSlim class was augmented with the async-ready WaitAsync overloads. Figure 10 demonstrates SemaphoreSlim.WaitAsync.

Figure 10 SemaphoreSlim Permits Asynchronous Synchronization
SemaphoreSlim mutex = new SemaphoreSlim(1);
int value;
Task<int> GetNextValueAsync(int current);
async Task UpdateValueAsync()
{
  await mutex.WaitAsync().ConfigureAwait(false);
  try
  {
    value = await GetNextValueAsync(value);
  }
  finally
  {
    mutex.Release();
  }
}

Asynchronous code is often used to initialize a resource that’s then cached and shared. There isn’t a built-in type for this, but Stephen Toub developed an AsyncLazy<T> that acts like a merge of Task<T> and Lazy<T>. The original type is described on his blog (bit.ly/dEN178), and an updated version is available in my AsyncEx library (nitoasyncex.codeplex.com).

Finally, some async-ready data structures are sometimes needed. TPL Dataflow provides a BufferBlock<T> that acts like an async-ready producer/consumer queue. Alternatively, AsyncEx provides AsyncCollection<T>, which is an async version of BlockingCollection<T>.
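Here is a brief sketch of BufferBlock<T> used as an async-ready producer/consumer queue (it lives in the System.Threading.Tasks.Dataflow package):

async Task ProducerConsumerAsync()
{
  var queue = new BufferBlock<int>();
  // Producer: post a value asynchronously, then signal that no more are coming.
  await queue.SendAsync(13);
  queue.Complete();
  // Consumer: drain the queue until the producer completes it.
  while (await queue.OutputAvailableAsync())
  {
    int item = await queue.ReceiveAsync();
    Console.WriteLine(item);
  }
}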

I hope the guidelines and pointers in this article have been helpful. Async is a truly awesome language feature, and now is a great time to start using it!

Resolving The WordPress Multisite Redirect Loop

This is a re-post from an original article by Tom Mcfarlin located here


Though I do the majority of my work using single site WordPress installs, there are a number of sites and projects in which I’ve used WordPress multisite and there’s a problem that I’ve experienced specifically with using WordPress multisite, subdomains, and shared hosting environments.

Specifically, the problem is this:

  • Install WordPress and activate multisite
  • Configure the installation to use subdomains (versus subdirectories)
  • Attempt to login and get stuck in a redirect loop

If you have a single instance of WordPress multisite installed on the same server, there’s no issue, but if you go beyond that then you normally hit a problem: a redirect loop.

The WordPress Multisite Redirect Loop

The WordPress Login Screen

The most frustrating screen ever (in a redirect loop, that is).

Once you’ve increased the number of your multisite installs beyond one, you’re likely to be unable to log in, as you’ll get stuck in a redirect loop. That is, every time you try to log in, you’re returned to the login screen.

Luckily, the fix is relatively easy.

In your wp-config.php file, add the following lines of code:

define('ADMIN_COOKIE_PATH', '/');
define('COOKIE_DOMAIN', '');
define('COOKIEPATH', '');
define('SITECOOKIEPATH', '');

And do so just before the line that reads:

/* That's all, stop editing! Happy blogging. */

Once done, the redirect issue should be resolved.

Why Does This Happen?

Whenever you’re running multiple versions of WordPress on the same server, you can visualize the setup like this:

WordPress Multisite Installation

Basically, each version of WordPress, regardless of its domain or subdomain, maps to a single IP address. In this case, 192.168.0.1.

When a request comes into the server, part of the request includes the domain. A domain is associated with an IP address. When a cookie is created, it includes the name, some sensitive content, and then the path.

For example:

NAME = wordpress_d676ec21cf050e966685794aa715694f
CONTENT = removed
PATH = /sitename/wp-admin

In a WordPress Multisite setup a cookie for two sites may look like this:

NAME = wordpress_d676ec21cf050e966685794aa715694f
PATH = /sitename/wp-admin

NAME = wordpress_d676ec21cf050e966685794aa715694f
PATH = /sitename/

Notice that the names of the two cookies above are exactly the same but the paths are different. This is because two different sites with different domains are hosted on the same IP address, and both cookies exist because they aren’t being reset.

Cookies Being Set By WordPress

Cookies being set for the different sites on the same domain.

As such, when you attempt to login to a WordPress installation on a different domain (but on the same IP), the cookie is essentially invalid.

Thus, WordPress – in the most technical term possible – wigs out.

But more seriously, wp-login doesn’t attempt to look for cookies before actually setting them. This means that an invalid cookie is being used and since it doesn’t attempt to clear the existing cookies, you get stuck in the login loop.

Thus, the big picture looks something like this:

WordPress Multisite and Cookies

Sure, clearing the cookies will do the trick, but users shouldn’t have to do that. Additionally, not everyone will see this problem occur, but if you’re in the business of managing a multisite installation in a shared environment, then you’re likely to see it.

The code above will ensure that WordPress is clearing the cookie for the given domain of the multisite thus allowing the login process to set it correctly.

Ubuntu 15.04 – Configure your system to have x11vnc running at startup

This article was originally posted here.

Hello World,

If you are following us, you probably remember that we already wrote a post about this topic (see Ubuntu 14.10 – Configure your system to have x11vnc running at startup).
Since Ubuntu 15.04 uses systemd, the instructions in the previous post are no longer applicable. Some of our readers had issues after upgrading to Ubuntu 15.04: x11vnc was no longer running at startup.

This post provides the information needed to have x11vnc running at startup on Ubuntu 15.04, where systemd is used.

 

Our Goal!

At the end of this post, you should be able to connect via VNC to your Ubuntu machine even after a reboot and even if no user is logged into the machine. This configuration should display the login screen in whichever VNC viewer client you are using.

We didn’t invent anything here. All the information provided here is based on the information made available at this location: https://help.ubuntu.com/community/VNC/Servers#Have_x11vnc_start_automatically_via_systemd_in_any_environment_.28Vivid.2B-.29

Installing x11vnc server

In this post, we have decided to use the x11vnc server package to provide VNC capabilities. The installation process is quite straightforward. Log into your Ubuntu 15.04 machine, open a terminal console and issue the following command:

sudo apt-get install x11vnc


To have a minimum of security, we will protect the VNC connection with a password. The password will be stored in a file. To create this file, issue the following command:

sudo x11vnc -storepasswd /etc/x11vnc.pass

You will be asked to enter a password. Enter it, confirm your choice, and you should be good to go.


Create the Service Unit file

So far, we have just issued standard commands related to the x11vnc package. We now need to create the service unit file for our x11vnc service. To do this, issue the following command:

sudo nano /lib/systemd/system/x11vnc.service

This file should contain the following lines:

[Unit]
Description=Start x11vnc at startup.
After=multi-user.target
[Service]
Type=simple
ExecStart=/usr/bin/x11vnc -auth guess -forever -loop -noxdamage -repeat -rfbauth /etc/x11vnc.pass -rfbport 5900 -shared
[Install]
WantedBy=multi-user.target

Save the file

Configure Systemd

It’s time to issue the commands that make systemd aware of the change and have the service run at startup. At a command prompt, issue the following commands:

sudo systemctl daemon-reload
sudo systemctl enable x11vnc.service
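Optionally, before rebooting you can start the service by hand and check that it is listening; these checks are not part of the original instructions but are handy for troubleshooting:

sudo systemctl start x11vnc.service      # start it right away
systemctl status x11vnc.service          # should report "active (running)"
sudo ss -tlnp | grep 5900                # x11vnc should be listening on port 5900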

 

Restart the system and do not log in. We will check whether this is working.

Testing the solution!

To check that you can indeed make a VNC connection to your Ubuntu machine, try to connect to it using your favourite VNC viewer (we are using TigerVNC) while nobody is logged in and just after a reboot of the machine.

In the VNC viewer, provide the IP address or hostname of the machine to connect to and the port to be used. In our example, the port used is 5900. If you have set a password to protect your VNC connection, you will be prompted for it as well.


If everything is OK, you should see the Ubuntu login page displayed inside your VNC viewer.


 

Final Notes

And voilà! We have successfully updated the instructions on how to have x11vnc run at startup. As you can see, since Ubuntu 15.04 uses systemd, we need to create our service unit file (x11vnc.service) and register it with systemctl, and we are done.

Pff… over the last few days I have updated some of the most popular posts about xrdp, x11vnc and Ubuntu 15.04. It’s time for me to take a break…

Till next time

Project 2013 and Project Server 2013 Training Links

Microsoft Training Links

 

Project 2013 training for IT pros and developers

General Link: http://technet.microsoft.com/en-us/office/dn756399

Development Links

Administration Links:

 

Quick Videos from Books 24×7

Books

Project Server Conference 2014

Channel 9 link: http://channel9.msdn.com/events/Project/2014

Interesting Development-related videos (I have not validated any of them yet)

White Papers

General Information

 

Office 365 – Single Sign-On for SharePoint, Skydrive, CRM, etc. via Smart Links


 

Synopsis: One of the biggest problems I have seen with Office 365 is ease of access to all of the Office 365 resources. As pointed out on many of the Microsoft forums, SharePoint, CRM, SkyDrive, etc. do not automatically complete a single sign-on request when browsing the website.

Problem: When a user browses to https://mydomain.sharepoint.com, for example, the user is prompted to enter their email address. What a user expects is to be automatically logged in and see SharePoint when navigating to https://mydomain.sharepoint.com. Additionally, for whatever reason, users cannot remember the website address https://mydomain.sharepoint.com; instead, they want to use something like http://sharepoint.mydomain.com.

Solution: Create brand-name “fancy URLs” that complete an IdP claim to give the user a true SSO experience:

  • http://owa.mydomain.com
  • http://sharepoint.mydomain.com
  • http://skydrive.mydomain.com
  • http://crm.mydomain.com

Steps:

  1. Open up Internet Explorer
  2. Navigate to https://mydomain.sharepoint.com
  3. Press F12 to open up the developer tools console (I am running IE 11; the console looks very different from previous versions of IE)
  4. Scroll down and select the icon that looks like a little WiFi antenna
  5. Click the green play button
  6. Type in your email address as you would to log in to SharePoint (myusername@mydomain.com)
  7. You should be redirected to your ADFS server, and inside the network console you should see a link like https://sts.mydomain.com/adfs/ls/?……………… Copy this link into Notepad.
  8. Remove the extra stuff that came along from the debug console, leaving only the URL
  9. Remove everything from cbcxt=….. to wa=wsignin1.0
  10. Remove the ct%3D1386214464%26 and bk%3D1386214464%26 parameters
  11. Next, open up another new Notepad document named index.html and paste the following text into it:
    <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
      "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"><html xmlns="http://www.w3.org/1999/xhtml"><head>
      <title>CRM</title>
      <meta http-equiv="refresh" content="0; url=https://sts.mydomain.com link goes here" /></head>
      <body>
      </body>
      </html>

  12. Replace https://sts.mydomain.com link goes here with your new smart link and save the document.
  13. Upload the index.html file to one of your webservers
  14. Create a new A record called sharepoint.mydomain.com pointing to your webserver
  15. Now when a user browses to http://sharepoint.mydomain.com, the user will automatically be redirected to your secure ADFS proxy and authenticated.

You will need to repeat the steps above for each of the Office 365 products your company uses. The federated addresses do change, so you will have to follow all of the steps again for each smart link you wish to create.

NOTES:
Here is an official article on creating smart links: http://community.office365.com/en-us/wikis/sso/using-smart-links-or-idp-initiated-authentication-with-office-365.aspx

How To: Set up Time Machine for Multiple Macs on FreeNAS (9.2.1.3)

This is copied from here

FreeNAS is awesome. Also, FreeNAS is hard… I recently switched from a Synology device, and while I am already appreciating the increase in functionality and power, it’s certainly not as easy to do some basic tasks. One of those tasks is setting up a Time Machine share where all of my household Macs can back up. Between reading the tutorials and doing some trial and error myself, I think I have come up with a good solution.
And before I get started with the step-by-step guide, let me reiterate one thing: permissions, permissions, permissions! If you ever find yourself banging your head against a wall because something in FreeNAS isn’t working as you expect it to, the likely culprit is permissions. Once you wrap your brain around them, though, things become simpler. Hopefully this guide helps put a foundation around that.

The Default FreeNAS Home Screen

This article assumes that you have FreeNAS already up and running on your network and that you’re able to connect to the main home screen with your web browser. I recommend giving it a static IP as well. Our first step will be to create a group and user for Time Machine backups.


Under the “Account” section on the left, click “Groups,” and then click “Add Group.”


You don’t need to change the default value for the group ID; put something like “time-machine” for the group name. Leave everything else as the default and click OK.

The next step is to create a ZFS dataset where we’re going to put the Time Machine backups. The dataset must be on a ZFS volume. I’m assuming you have already created a ZFS volume with your disks here, but if you haven’t, stop reading this guide and go read the FreeNAS ZFS documentation here. If you have already created the volume, create a dataset. Datasets can be nested inside of other datasets, so I actually have one dataset called “Backup” and inside of that one I have one called “Time-Machine” ~ it really just depends on how you want things set up.


After you enter the name “Time-Machine”, leave all of the default values alone. The screenshot below shows how I have “Time-Machine” nested inside of my Backup dataset.


So now we have a dataset. This is going to be where all of our Time
Machine backups get saved. The next step is the most important and the
one that has bitten me before… so don’t forget it. We need to change
the permissions on the “Time-Machine” dataset. Recall that we initially
created a group called “time-machine” – we are now going to set things
up such that any user in the “time-machine” group can write to the
“Time-Machine” dataset. Click on the “Time-Machine” dataset and then
click on the icon with a key on it to change its permissions.


When you click that, a permissions dialog box will pop up.


I chose not to change the default user owner of “root.” However,
definitely change the group owner. In the drop down box, the
“time-machine” group that we previously created should be selectable.
Click that and then make sure to have the boxes checked as I have in the
image above. We want any user in the group to have read / write /
execute privileges.
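
(If you prefer to make the same change from the FreeNAS shell instead of the key-icon dialog, something like the following should be equivalent; the pool name “tank” in the path is an assumption, so adjust it to match your own volume and dataset names.)

# Give the time-machine group ownership and read/write/execute access to the dataset
chown -R root:time-machine /mnt/tank/Backup/Time-Machine
chmod -R 770 /mnt/tank/Backup/Time-Machine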

Click the “Change” button to have the new permissions take effect.
Now it’s time to create a user for the Time Machine backup. I believe it
is best to create a separate user for each computer (and I’ll explain
why at the end of the post) so just create users that reflect that
computer. For example, the user I’m creating is called
“kevinmacbookair.”


Once again, you navigate over to the left column to create a new
user. Leave the “User ID” field as the default. Give your username a
simple lowercase name like mine. Uncheck the box about creating a new
primary group for the user. Instead, go to the drop down list and select
“time-machine” in there. In the full name, put a descriptive name. Type
in a password, and then you’re good to go.

So what we’ve done so far is created a group called “time-machine”
which has full access to the “Time Machine” dataset. Next we added a
user that is part of the “time-machine” group. Easy! The last thing we
need to do is create an AFP (Apple Filing Protocol) share that will
broadcast this over the network so your Mac can see it. To do this,
click the “Sharing” link on the column on the far left and click the
button to create a new AFP share.


Name your share something you like, and then use the file browser to
make sure that the “Path” is set to the ZFS dataset that we created for
our Time Machine backups. Next, for the “Allow List” and “Read-write
Access” fields, we want to put the group that we created, “time-machine”
~ however, because it’s a group and not a user, we need to put the “@”
symbol in front of it: “@time-machine”. Next, make sure the “Time
Machine” box is checked. Finally, take a look at those check boxes of
privileges and make sure they match what’s listed above. Then click OK.
At this point, we’re done with everything on the FreeNAS system. It’s
now time to set up Time Machine on the Mac!


On the Mac, just open up the Time Machine preferences, and if you go
to select a disk, you should find the one we created there! It will ask
you for a username / password, and you want to make sure you enter the
machine-specific one we created in FreeNAS, not your OS X username /
password.


You should be golden! If you want to add more than one computer, you
don’t need to add any new AFP shares or anything like that. Just create
new users for each machine, and make sure that each user is part of the
“time-machine” group that we created earlier. The final improvement to
make this work even better would be for us to cap how much space each
computer has to back up. For example, my MacBook Air has 256GB of space,
and anything on my MacBook Air is also on my other machines so I really
wouldn’t want to give it more than 300GB of usable space for historical
backups. Time Machine will automatically delete the older ones if it
runs out of room. By contrast, my MacBook Pro is loaded up with all
of my important data and I might want to give it 2x the space of its
SSD. Right now there isn’t a great way to do this for multiple Macs in
FreeNAS, but a feature is coming soon that will make it easy! This feature is per-user quotas. This will allow us to specify the maximum amount of space each user is allowed.

I hope this guide was useful!

Launching Visual Studio Android Emulator from the command line

Quick command line to launch the Visual Studio Android Emulator for Lollipop

"%ProgramFiles(x86)%\Microsoft XDE\10.0.1.0\xde.exe" /sku Android /displayName "VS Emulator 5-inch Lollipop (5.0) XXHDPI Phone" /memSize 2048 /diagonalSize 5 /video "1080x1920" /vhd "%LOCALAPPDATA%\Microsoft\VisualStudioEmulator\Android\Containers\LocalDevices\vhd\5_Lollipop_(5.0)_XXHDPI_Phone\image.vhd" /name "VS Emulator 5-inch Lollipop (5.0) XXHDPI Phone.%USERNAME%"

Cisco AnyConnect

Cisco AnyConnect is an SSL VPN client that provides reliable and easy-to-deploy encrypted (SSL) network connectivity for Windows.

Typically, the Cisco AnyConnect client would be downloaded from the
VPN site, but the version currently available from that location is not
compatible with current versions of Windows 7 and Windows 8 and will not
function properly due to Microsoft Windows security updates.

Download Link

AnyConnect-3.1.02026.exe (3.9MB)