itramblings

Ramblings from an IT manager and long time developer.


Using Let’s Encrypt to secure cloud-hosted services like Ubiquiti’s mFi, Unifi and Unifi Video


Updated Jul 31, 2016: Moved away from letsencrypt-auto and switched to certbot, updated the auto-renewal script, and changed the suggested cron time to weekly. Also noted that the mFi series has been discontinued. Finally, fixed the install instructions for Unifi Video.


Wow – I got myself a free signed SSL cert for my WiFi controller!

Let’s Encrypt recently launched and is definitely super interesting. They issue SSL certificates for free. SSL certs typically cost hundreds of dollars per domain, and even more for wildcard certificates. It’s insane: an entire industry predicated on artificial pricing for something that costs essentially nothing to generate and maintain. Not to mention it holds back security and encryption on the web, since not just anyone can afford hundreds of dollars a year for a cert. This entire industry is holding back progress at a massive scale, so we’re going to fix that 🙂

With Let’s Encrypt, this is all free now. As cost is no longer a problem, we can encrypt other communication, like router config landing pages and other services. No need for self-signed certificates that make your browser freak out; now we can have real certs!

As I’m a big fan of Ubiquiti products, I’m going to show some examples in this article of how to use Let’s Encrypt to generate certificates that are compatible with the mFi automation line, Ubiquiti’s Unifi wifi controller, and their Unifi Video series for surveillance. Ubiquiti, being an enterprise company, [imo wrongly] expects companies to want to host the backing controller software for these devices on-site. We’re going to host them on EC2 instead, so we don’t need to manage servers or have people tripping over power cables. Since we’re on the internet, though, we need proper SSL to keep the NSA and all their shenanigans at bay.


Ubiquiti’s mFi, Unifi wireless and Unifi Video micro camera. They all need a hosted controller.

Note: This article will be specific to configuring Ubiquiti’s services, but the Let’s Encrypt instructions are the same regardless of what kind of service you might want to secure.

Note: Also, heads up that the mFi line of products has since been discontinued by Ubiquiti. The instructions here should still work, though.

So, let’s get started!

Part A: Provisioning an EC2 server w/ Let’s Encrypt

Let’s Encrypt is somewhat unusual in the way it works. Yes, they give out free certificates, but the certificates need to be renewed every 3 months. The short lifetime is deliberate: it encourages automation and limits the damage if a key is ever compromised. As such, on whatever server you’re using to host your service, you’ll need a cron job that runs Let’s Encrypt regularly. Otherwise your cert will expire, and you’re going to have a bad time.

Go to AWS EC2 and create an instance. For Ubiquiti products, I’ve found that even one t2.micro machine can run all three of the servers we’ll be dealing with in this article. If configured right, you might even be able to stay in the AWS Free Tier.

  • Type: t2.micro
  • OS: Ubuntu Server 14.04 LTS
  • Storage: ~30GB (maybe more if you’ll be doing a lot of video recording)
  • Ports to open: at least port 443 for the Let’s Encrypt verification, plus the following depending on the Ubiquiti service (all TCP unless otherwise specified; an example security-group command follows this list):
    • mFi: 6080, 6443
    • Unifi: 8081, 8080, 8443, 8880, 8843, 3478 (UDP)
    • Unifi Video: 6666, 7080, 7443, 7445, 7446, 7447
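
If you prefer the CLI to the console for opening these ports, a security group rule can be added like this (the group ID below is a placeholder; repeat per port, and consider restricting the source range to your own IP):

# open the Unifi controller's HTTPS port to the world; sg-0123456789 is a placeholder
aws ec2 authorize-security-group-ingress --group-id sg-0123456789 --protocol tcp --port 8443 --cidr 0.0.0.0/0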

Part B: Install the Ubiquiti services you’d like

You’ll need to add Ubiquiti’s repositories so you can use apt-get to easily install the right services.

  • mFi:
    echo 'deb http://dl.ubnt.com/mfi/distros/deb/ubuntu ubuntu ubiquiti' | sudo tee -a /etc/apt/sources.list.d/100-ubnt.list
    sudo apt-key adv --keyserver keyserver.ubuntu.com --recv C0A52C50
    sudo apt-get update
    sudo apt-get install mfi


  • Unifi:

    # note that you can change stable to unifi5 for v5
    echo 'deb http://www.ubnt.com/downloads/unifi/debian stable ubiquiti' | sudo tee -a /etc/apt/sources.list.d/100-ubnt.list
    sudo apt-key adv --keyserver keyserver.ubuntu.com --recv C0A52C50
    sudo apt-get update
    sudo apt-get install unifi
  • Unifi Video:
    # visit https://community.ubnt.com/t5/UniFi-Video-Blog/bg-p/blog_airVision for the latest version instructions.
    # here's version 3.3, though there may be a newer version by now:
    wget http://dl.ubnt.com/firmwares/unifi-video/3.3.0/unifi-video_3.3.0~Debian7_amd64.deb
    sudo dpkg -i unifi-video_3.3.0~Debian7_amd64.deb
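
If dpkg complains about missing dependencies during that last step, the usual fix is to let apt pull them in and finish the configuration:

sudo apt-get -f install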

After the installation of the packages you want, you should be able to go to the https endpoint to see the page. It’ll be: https://<your-host>:6443 for mFi, https://<your-host>:8443 for Unifi, and https://<your-host>:7443 for Unifi Video.

Problem is, you’re using a self-signed certificate, so your web browser will complain. Next, we’re going to use Let’s Encrypt to get a real certificate.


Self-signed cert’s not so hot. ;(

Part C: Generating the signed certificate with Let’s Encrypt

Let’s install Let’s Encrypt now. Reminder that this needs to be done on the server itself, not your local machine. We’ll be using certbot and essentially following its instructions.

wget https://dl.eff.org/certbot-auto
chmod a+x certbot-auto
./certbot-auto

That last line will configure certbot and also install some dependencies.

Now, using certbot, we generate the signed certificate. So let’s run the wizard:

./certbot-auto certonly

Select option 2 (to use a temporary webserver), then enter your email (so you get alerts if things go wrong), agree to the agreement, and finally type in your domain name (along with the subdomain). If everything went well, you should get a Congratulations message.
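
If you’d rather skip the wizard, the same request can be made non-interactively. A minimal sketch, assuming the temporary-webserver (standalone) method and placeholder email/domain values:

./certbot-auto certonly --standalone --non-interactive --agree-tos -m admin@mydomain.com -d mysubdomain.mydomain.com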

Part D: Load the certs into the services

The Ubiquiti services are Java-based and use the Java keystore as a way of storing private keys and certificates. We first need to generate a PKCS #12 bundle from the raw files we just received:

sudo openssl pkcs12 -export -inkey /etc/letsencrypt/live/mysubdomain.mydomain.com/privkey.pem -in /etc/letsencrypt/live/mysubdomain.mydomain.com/fullchain.pem -out /home/ubuntu/cert.p12 -name ubnt -password pass:temppass

Again, don’t forget to replace mysubdomain.mydomain.com with your domain name. Everything else can remain as-is.

Now for each service you’ll need to load the PKCS #12 certificate into its own keystore.

  • mFi:
    sudo keytool -importkeystore -deststorepass aircontrolenterprise -destkeypass aircontrolenterprise -destkeystore /var/lib/mfi/keystore -srckeystore /home/ubuntu/cert.p12 -srcstoretype PKCS12 -srcstorepass temppass -alias ubnt -noprompt
  • Unifi:
    sudo keytool -importkeystore -deststorepass aircontrolenterprise -destkeypass aircontrolenterprise -destkeystore /var/lib/unifi/keystore -srckeystore /home/ubuntu/cert.p12 -srcstoretype PKCS12 -srcstorepass temppass -alias ubnt -noprompt
  • Unifi Video:
    sudo keytool -importkeystore -deststorepass ubiquiti -destkeypass ubiquiti -destkeystore /var/lib/unifi-video/keystore -srckeystore /home/ubuntu/cert.p12 -srcstoretype PKCS12 -srcstorepass temppass -alias ubnt -noprompt


Basically all that’s different is the keystore location of the service, and the password Ubiquiti uses to protect it.
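
If you want to sanity-check an import before restarting anything, keytool can list the keystore contents. Shown here for Unifi; swap the path and password per the list above:

sudo keytool -list -keystore /var/lib/unifi/keystore -storepass aircontrolenterprise -alias ubnt
# should report a PrivateKeyEntry for the "ubnt" alias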

Finally, delete the PKCS #12 file (since it’s already been imported), and restart the services (as appropriate):

sudo rm /home/ubuntu/cert.p12
sudo /etc/init.d/mfi restart
sudo /etc/init.d/unifi restart
sudo /etc/init.d/unifi-video restart


That’s basically it! Go to those same URLs as before and hopefully your browser will no longer complain. 🙂


The browser likes it!

Part E: Automating Let’s Encrypt certificate renewal

As mentioned before, Let’s Encrypt certificates only last 3 months. As such, we’ll need this machine to attempt to renew the certificates weekly and then place the new certs back into the services. It’s essentially doing Parts C and D as a scheduled cron job. Weekly can seem like a lot, but the renewal fails fast if nothing needs to be done.

Create a new file /home/ubuntu/renew_lets_encrypt_cert.sh and customize it according to what you used in Parts C and D. No sudo needed inside the script, since cron will run it as the superuser. Use full paths to files. Here’s an example:

#!/bin/bash

# Get the certificate from Let's Encrypt
/home/ubuntu/certbot-auto renew --quiet --no-self-upgrade

# Convert cert to PKCS #12 format
openssl pkcs12 -export -inkey /etc/letsencrypt/live/mysubdomain.mydomain.com/privkey.pem -in /etc/letsencrypt/live/mysubdomain.mydomain.com/fullchain.pem -out /home/ubuntu/cert.p12 -name ubnt -password pass:temppass

# Load it into the java keystore that UBNT understands
keytool -importkeystore -deststorepass aircontrolenterprise -destkeypass aircontrolenterprise -destkeystore /var/lib/mfi/keystore -srckeystore /home/ubuntu/cert.p12 -srcstoretype PKCS12 -srcstorepass temppass -alias ubnt -noprompt
keytool -importkeystore -deststorepass aircontrolenterprise -destkeypass aircontrolenterprise -destkeystore /var/lib/unifi/keystore -srckeystore /home/ubuntu/cert.p12 -srcstoretype PKCS12 -srcstorepass temppass -alias ubnt -noprompt
keytool -importkeystore -deststorepass ubiquiti -destkeypass ubiquiti -destkeystore /var/lib/unifi-video/keystore -srckeystore /home/ubuntu/cert.p12 -srcstoretype PKCS12 -srcstorepass temppass -alias ubnt -noprompt

# Clean up and use new cert
rm /home/ubuntu/cert.p12
/etc/init.d/mfi restart
/etc/init.d/unifi restart
/etc/init.d/unifi-video restart

Make sure to make this executable:

sudo chmod +x /home/ubuntu/renew_lets_encrypt_cert.sh

Now let’s edit the crontab with sudo crontab -e and put the following at the bottom:

1 1 * * 1 /home/ubuntu/renew_lets_encrypt_cert.sh

This will schedule the certificate renewal attempt for every Monday at 1:01am.

And now you’re really done! You have a free SSL certificate from Let’s Encrypt being automatically renewed and loaded into the different services on a weekly schedule.

Thanks for reading!


AD Domain Join Ubuntu with DNS update

Here are a couple of useful articles to help with this task

Key Steps



Configure the NTP Server on Windows Server 2016

Copied from https://www.ceos3c.com/2017/07/06/configure-ntp-server-windows-server-2016/

We will use PowerShell to change the NTP Server and we will validate if it worked afterwards.


On your Windows Server 2016, hit the Windows key and type PowerShell, then right-click it and select Run as Administrator.

Type the following commands

  • w32tm /config /manualpeerlist:pool.ntp.org /syncfromflags:MANUAL
  • Stop-Service w32time
  • Start-Service w32time

Of course, you can use any NTP server that you want.

Now verify that the time server was set correctly on your Server 2016 by typing:

  • w32tm /query /status

You should get a reply like this:

Now we will go ahead and verify if our clients sync properly.


Verifying if the Time Server was correctly set on our Clients

On a client computer, open PowerShell via right-click and Run as Administrator.

Type:

  • w32tm /query /status

Check the time-server and type:

  • w32tm /resync
  • w32tm /query /status

Then you should see that the time server has actually changed.

And that’s it! Easy, right?

Always make sure that you use the same time server in your network and that all your clients are syncing from it. If you have time differences inside your Active Directory domain, you will run into major issues.
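
One extra tip beyond the original write-up: domain-joined clients normally sync from the domain hierarchy automatically, and you can reset a client back to that default (treat this as optional) with:

  • w32tm /config /syncfromflags:domhier /update
  • w32tm /resync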


Troubleshooting 502 Errors in ARR

Taken from here: https://docs.microsoft.com/en-us/iis/extensions/troubleshooting-application-request-routing/troubleshooting-502-errors-in-arr


by Richard Marr

Tools Used in this Troubleshooter:

  • IIS Failed Request Tracing
  • Network Monitor
  • Winhttp Tracing

This material is provided for informational purposes only. Microsoft makes no warranties, express or implied.

HTTP 502 – Overview

When working with IIS Application Request Routing (ARR) deployments, one of the errors that you may see is “HTTP 502 – Bad Gateway”. The 502.3 error means that – while acting as a proxy – ARR was unable to complete the request to the upstream server and send a response back to the client. This can happen for multiple reasons – for example: failure to connect to the server, no response from the server, or the server took too long to respond (time out). If you are able to reproduce the error by browsing the web farm from the controller, and detailed errors are enabled on the server, you may see an error similar to the following:

Figure 1

The root cause of the error will determine the actions you should take to resolve the issue.

502.3 Timeout Errors

The error code in the screenshot above is significant because it contains the return code from WinHTTP, which is what ARR uses to proxy the request and identifies the reason for the failure.

You can decode the error code with a tool like err.exe. In this example, the error code maps to ERROR_WINHTTP_TIMEOUT. You can also find this information in the IIS logs for the associated website on the ARR controller. The following is an excerpt from the IIS log entry for the 502.3 error, with most of the fields trimmed for readability:

sc-status sc-substatus sc-win32-status time-taken
502 3 12002 29889

The win32 status 12002 maps to the same ERROR_WINHTTP_TIMEOUT error reported in the error page.
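
For reference, you can decode the code yourself with Microsoft’s err.exe lookup tool (assuming you have it on your PATH); it prints the matching symbolic names, including ERROR_WINHTTP_TIMEOUT from winhttp.h:

err.exe 12002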

What exactly timed out?

We can investigate this a bit further by enabling Failed Request Tracing on the IIS server. The first thing we can see in the failed request trace log is where the request was sent, in the ARR_SERVER_ROUTED event. The second item I have highlighted, the X-ARR-LOG-ID, is what you can use to track the request on the target server. This will help if you are tracing the target or destination of the HTTP request:

77. ARR_SERVER_ROUTED RoutingReason="LoadBalancing", Server="192.168.0.216", State="Active", TotalRequests="3", FailedRequests="2", CurrentRequests="1", BytesSent="648", BytesReceived="0", ResponseTime="15225" 16:50:21.033
78. GENERAL_SET_REQUEST_HEADER HeaderName="Max-Forwards", HeaderValue="10", Replace="true" 16:50:21.033
79. GENERAL_SET_REQUEST_HEADER HeaderName="X-Forwarded-For", HeaderValue="192.168.0.204:49247", Replace="true" 16:50:21.033
80. GENERAL_SET_REQUEST_HEADER HeaderName="X-ARR-SSL", HeaderValue="", Replace="true" 16:50:21.033
81. GENERAL_SET_REQUEST_HEADER HeaderName="X-ARR-ClientCert", HeaderValue="", Replace="true" 16:50:21.033
82. GENERAL_SET_REQUEST_HEADER HeaderName="X-ARR-LOG-ID", HeaderValue="dbf06c50-adb0-4141-8c04-20bc2f193a61", Replace="true" 16:50:21.033
83. GENERAL_SET_REQUEST_HEADER HeaderName="Connection", HeaderValue="", Replace="true" 16:50:21.033

The following example shows how this might look in the target server's Failed Request Tracing logs; you can validate that you have found the correct request by matching up the "X-ARR-LOG-ID" values in both traces.

185. GENERAL_REQUEST_HEADERS Headers="Connection: Keep-Alive Content-Length: 0 Accept: */* Accept-Encoding: gzip, deflate Accept-Language: en-US Host: test Max-Forwards: 10 User-Agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1; WOW64; Trident/4.0) X-Original-URL: /time/ X-Forwarded-For: 192.168.0.204:49247 X-ARR-LOG-ID: dbf06c50-adb0-4141-8c04-20bc2f193a61
345. GENERAL_FLUSH_RESPONSE_END BytesSent="0", ErrorCode="An operation was attempted on a nonexistent network connection. (0x800704cd)" 16:51:06.240

In the above example, we can see that the ARR server disconnected before the HTTP response was sent. The timestamp for GENERAL_FLUSH_RESPONSE_END can be used as a rough guide to find the corresponding entry in the IIS logs on the destination server.

date time s-ip cs-method cs-uri-stem cs-uri-query s-port cs-username sc-status sc-substatus sc-win32-status time-taken
2011-07-18 16:51:06 192.168.0.216 GET /time/ - 80 - 200 0 64 45208

Note that IIS on the destination server logged an HTTP 200 status code, indicating that the request completed successfully. Also note that the win32 status has changed to 64, which maps to ERROR_NETNAME_DELETED. This generally indicates that the client (ARR being the 'client' in this case) had disconnected before the request completed.

What happened?

Only the ARR server is reporting a timeout, so that is where we should look first.

In the IIS log entry from the ARR server, we can see that the time-taken is very close to 30 seconds, but the member server log shows that it took 45 seconds (45208 ms) to send the response. This suggests that ARR is timing the request out, and if we check the proxy timeout in the server farm's proxy settings, we will see that it is set to 30 seconds by default.

So in this case we can clearly see that the ARR timeout was shorter than the execution of the request. Therefore, you would want to investigate whether this execution time was normal or whether you would need to look at why the request was taking longer than expected. If this execution time was expected and normal, increasing the ARR timeout should resolve the error.
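
For reference, the proxy timeout can be raised in IIS Manager (select the server farm, open Proxy, and adjust the time-out), or from an elevated command prompt. The following is only a sketch with a placeholder farm name; verify the exact configuration path against your ARR version:

%windir%\system32\inetsrv\appcmd.exe set config -section:webFarms "/[name='MyFarm'].applicationRequestRouting.protocol.timeout:00:00:50" /commit:apphost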

Other possible reasons for ERROR_WINHTTP_TIMEOUT include:

  • ResolveTimeout: This occurs if name resolution takes longer than the specified timeout period.
  • ConnectTimeout: This occurs if it takes longer than the specified timeout period to connect to the server after the name resolved.
  • SendTimeout: If sending a request takes longer than this time-out value, the send operation is canceled.
  • ReceiveTimeout: If a response takes longer than this time-out value, the request is canceled.

For the first two cases, ResolveTimeout and ConnectTimeout, the troubleshooting methodology outlined above would not work, because you would not see any traffic on the target server and therefore would not know the error code. In those cases you would want to capture a WinHTTP trace for additional insight; see the WinHTTP/WebIO Tracing section of this troubleshooter for additional examples of troubleshooting and tracing.

502.3 Connection Termination Errors

502.3 errors are also returned when the connection between ARR and the member server is disconnected mid-stream. To test this type of problem, create a simple .aspx page that calls Response.Close(). In the following example there is a directory called “time” which is configured with a simple aspx page as the default document of that directory. When browsing to the directory, ARR will display this error:

Figure 2

The error 0x80072efe corresponds to ERROR_INTERNET_CONNECTION_ABORTED. The request can be traced to the server that actually processed it using the same steps used earlier in this troubleshooter, with one exception: while Failed Request Tracing on the destination server shows the request was processed on the server, the associated log entry does not appear in the IIS logs. Instead, this request is logged in the HTTPERR log as follows:

HTTP/1.1 GET /time/ - 1 Connection_Dropped DefaultAppPool

The built-in logs on the destination server do not provide any additional information about the problem, so the next step would be to gather a network trace from the ARR server. In the example above, the .aspx page called Response.Close() without returning any data. Viewing this in a network trace would show that a Connection: close HTTP header was coming from the destination server. With this information you could now start an investigation into why the Connection: close header was sent.

The error below is another example of an invalid response from the member server:

Figure 3

In this example, ARR started to receive data from the client but something went wrong while reading the request entity body. This results in the 0x80072f78 error code being returned. To investigate further, use Network Monitor on the member server to get a network trace of the problem. This particular error example was created by calling Response.Close() in the ASP.net page after sending part of the response and then calling Response.Flush(). If the traffic between the ARR server and the member servers is over SSL, then WinHTTP tracing on Windows Server 2008 or WebIO tracing on Windows Server 2008 R2 may provide additional information. WebIO tracing is described later in this troubleshooter.

502.4 No appropriate server could be found to route the request

The HTTP 502.4 error with an associated error code of 0x00000000 generally indicates that all the members of the farm are either offline, or otherwise unreachable.

Figure 4

The first step is to verify that the member servers are actually online. To check this, go to the “servers” node under the farm in the IIS Manager.

Figure 5

Servers that are offline can be brought back online by right-clicking on the server name and choosing "Add to Load Balancing". If you cannot bring the servers back online, verify the member servers are reachable from the ARR server. The "Trace Messages" pane on the "servers" page may also provide some clues about the problem. If you are using Web Farm Framework (WFF) 2.0, you may receive this error if the application pool restarts; you will need to restart the Web Farm Service to recover.

WinHTTP/WebIO Tracing

Usually, Network Monitor will provide you with the information you need to identify exactly what is timing out; however, there are times (such as when the traffic is SSL encrypted) that you will need to try a different approach. On Windows 7 and Windows Server 2008 R2 you can enable WinHTTP tracing using the netsh tool by running the following command from an administrative command prompt:

netsh trace start scenario=internetclient capture=yes persistent=no level=verbose tracefile=c:\temp\net.etl

Then, reproduce the problem. Once the problem is reproduced, stop the tracing by running the following command from the command prompt:

netsh trace stop

The stop command will take a few seconds to finish. When it is done, you will find a net.etl file and a net.cab file in C:\temp. The .cab file contains event logs and additional data that may prove helpful in analyzing the .etl file.

To analyze the log, open it in Netmon 3.4 or later. Make sure you have set up your parser profile as described here. Scroll through the trace until you find the w3wp.exe instance where ARR is running by correlating with the "UT Process Name" column. Right-click on w3wp and choose "Add UT Process name to display filter". This will set the display filter to something similar to:

UTProcessName == "w3wp.exe (1432)"

You can further filter the results by changing it to the following:

UTProcessName == "w3wp.exe (1432)" AND ProtocolName == "WINHTTP_MicrosoftWindowsWinHttp"

You will need to scroll through the output until you find the timeout error. In the example below, a request timed out because it took more than 30 seconds (ARR's default timeout) to run.
336 2:32:22 PM 7/22/2011 32.6380453 w3wp.exe (1432) WINHTTP_MicrosoftWindowsWinHttp WINHTTP_MicrosoftWindowsWinHttp:12:32:23.123 ::sys-recver starts in _INIT state
337 2:32:22 PM 7/22/2011 32.6380489 w3wp.exe (1432) WINHTTP_MicrosoftWindowsWinHttp WINHTTP_MicrosoftWindowsWinHttp:12:32:23.123 ::current thread is not impersonating
340 2:32:22 PM 7/22/2011 32.6380584 w3wp.exe (1432) WINHTTP_MicrosoftWindowsWinHttp WINHTTP_MicrosoftWindowsWinHttp:12:32:23.123 ::sys-recver processing WebReceiveHttpResponse completion (error-cdoe = ? (0x5b4), overlapped = 003728F0))
341 2:32:22 PM 7/22/2011 32.6380606 w3wp.exe (1432) WINHTTP_MicrosoftWindowsWinHttp WINHTTP_MicrosoftWindowsWinHttp:12:32:23.123 ::sys-recver failed to receive headers; error = ? (1460)
342 2:32:22 PM 7/22/2011 32.6380800 w3wp.exe (1432) WINHTTP_MicrosoftWindowsWinHttp WINHTTP_MicrosoftWindowsWinHttp:12:32:23.123 ::ERROR_WINHTTP_FROM_WIN32 mapped (?) 1460 to (ERROR_WINHTTP_TIMEOUT) 12002
343 2:32:22 PM 7/22/2011 32.6380829 w3wp.exe (1432) WINHTTP_MicrosoftWindowsWinHttp WINHTTP_MicrosoftWindowsWinHttp:12:32:23.123 ::sys-recver returning ERROR_WINHTTP_TIMEOUT (12002) from RecvResponse()
344 2:32:22 PM 7/22/2011 32.6380862 w3wp.exe (1432) WINHTTP_MicrosoftWindowsWinHttp WINHTTP_MicrosoftWindowsWinHttp:12:32:23.123 ::sys-req completes recv-headers inline (sync); error = ERROR_WINHTTP_TIMEOUT (12002)

In this next example, the content server was completely offline:

42 2:26:39 PM 7/22/2011 18.9279133 WINHTTP_MicrosoftWindowsWinHttp WINHTTP_MicrosoftWindowsWinHttp:12:26:39.704 ::WinHttpReceiveResponse(0x11d23d0, 0x0) {WINHTTP_MicrosoftWindowsWinHttp:4, NetEvent:3}
43 2:26:39 PM 7/22/2011 18.9279633 WINHTTP_MicrosoftWindowsWinHttp WINHTTP_MicrosoftWindowsWinHttp:12:26:39.704 ::sys-recver starts in _INIT state {WINHTTP_MicrosoftWindowsWinHttp:4, NetEvent:3}
44 2:26:39 PM 7/22/2011 18.9280469 WINHTTP_MicrosoftWindowsWinHttp WINHTTP_MicrosoftWindowsWinHttp:12:26:39.704 ::current thread is not impersonating {WINHTTP_MicrosoftWindowsWinHttp:4, NetEvent:3}
45 2:26:39 PM 7/22/2011 18.9280776 WINHTTP_MicrosoftWindowsWinHttp WINHTTP_MicrosoftWindowsWinHttp:12:26:39.704 ::sys-recver processing WebReceiveHttpResponse completion (error-cdoe = WSAETIMEDOUT (0x274c), overlapped = 003728F0)) {WINHTTP_MicrosoftWindowsWinHttp:4, NetEvent:3}
46 2:26:39 PM 7/22/2011 18.9280802 WINHTTP_MicrosoftWindowsWinHttp WINHTTP_MicrosoftWindowsWinHttp:12:26:39.704 ::sys-recver failed to receive headers; error = WSAETIMEDOUT (10060) {WINHTTP_MicrosoftWindowsWinHttp:4, NetEvent:3}
47 2:26:39 PM 7/22/2011 18.9280926 WINHTTP_MicrosoftWindowsWinHttp WINHTTP_MicrosoftWindowsWinHttp:12:26:39.704 ::ERROR_WINHTTP_FROM_WIN32 mapped (WSAETIMEDOUT) 10060 to (ERROR_WINHTTP_TIMEOUT) 12002 {WINHTTP_MicrosoftWindowsWinHttp:4, NetEvent:3}
48 2:26:39 PM 7/22/2011 18.9280955 WINHTTP_MicrosoftWindowsWinHttp WINHTTP_MicrosoftWindowsWinHttp:12:26:39.704 ::sys-recver returning ERROR_WINHTTP_TIMEOUT (12002) from RecvResponse() {WINHTTP_MicrosoftWindowsWinHttp:4, NetEvent:3}



Setting up ASSP, Postfix with SMTP auth for a remote server


Exporting Hyper-V Client IP Address Details to a CSV

Took a while to figure out the correct syntax: the following line will export all of the primary IP addresses along with the VM name and MAC address in a single PS command.

Get-VM | ?{$_.ReplicationMode -ne "Replica"} | Select -ExpandProperty NetworkAdapters | ? IPAddresses -ne $null | % { [PSCustomObject]@{ Name=$_.VMName; IPAddress=$_.IPAddresses[0]; MacAddress=$_.MacAddress } }
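
Since the goal is a CSV, the same pipeline can be piped straight into Export-Csv (the output path below is just an example):

Get-VM | ?{$_.ReplicationMode -ne "Replica"} | Select -ExpandProperty NetworkAdapters | ? IPAddresses -ne $null | % { [PSCustomObject]@{ Name=$_.VMName; IPAddress=$_.IPAddresses[0]; MacAddress=$_.MacAddress } } | Export-Csv -Path .\vm-ips.csv -NoTypeInformation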



The search engine for … IoT, Buildings, Security, etc.

Check out https://www.shodan.io/


Useful Docker Scripts

Here are a couple of useful Docker PowerShell utility scripts.

ConvertFrom-Docker.psm1 – this will take the output from something like

docker ps

and convert it into PowerShell objects. For example:

docker ps | ConvertFrom-Docker

ContainerId : fbf1955f00a9
Image       : portainer/portainer
Created     : About an hour ago
Status      : Up About an hour
Ports       : 0.0.0.0:9000->9000/tcp
Name        : portainer

Here is the code

function PascalName($name){
    $parts = $name.Split(" ")
    for($i = 0 ; $i -lt $parts.Length ; $i++){
        $parts[$i] = [char]::ToUpper($parts[$i][0]) + $parts[$i].SubString(1).ToLower();
    }
    $parts -join ""
}
function GetHeaderBreak($headerRow, $startPoint=0){
    $i = $startPoint
    while( $i + 1  -lt $headerRow.Length)
    {
        if ($headerRow[$i] -eq ' ' -and $headerRow[$i+1] -eq ' '){
            return $i
        }
        $i += 1
    }
    return -1
}
function GetHeaderNonBreak($headerRow, $startPoint=0){
    $i = $startPoint
    while( $i + 1  -lt $headerRow.Length)
    {
        if ($headerRow[$i] -ne ' '){
            return $i
        }
        $i += 1
    }
    return -1
}
function GetColumnInfo($headerRow){
    $lastIndex = 0
    $i = 0
    while ($i -lt $headerRow.Length){
        $i = GetHeaderBreak $headerRow $lastIndex
        if ($i -lt 0){
            $name = $headerRow.Substring($lastIndex)
            New-Object PSObject -Property @{ HeaderName = $name; Name = PascalName $name; Start=$lastIndex; End=-1}
            break
        } else {
            $name = $headerRow.Substring($lastIndex, $i-$lastIndex)
            $temp = $lastIndex
            $lastIndex = GetHeaderNonBreak $headerRow $i
            New-Object PSObject -Property @{ HeaderName = $name; Name = PascalName $name; Start=$temp; End=$lastIndex}
       }
    }
}
function ParseRow($row, $columnInfo) {
    $values = @{}
    $columnInfo | ForEach-Object {
        if ($_.End -lt 0) {
            $len = $row.Length - $_.Start
        } else {
            $len = $_.End - $_.Start
        }
        $values[$_.Name] = $row.SubString($_.Start, $len).Trim()
    }
    New-Object PSObject -Property $values
}

function ConvertFrom-Docker(){
    begin{
        $positions = $null;
    }
    process {
        if($positions -eq $null) {
            # header row => determine column positions
            $positions  = GetColumnInfo -headerRow $_  #-propertyNames $propertyNames
        } else {
            # data row => output!
            ParseRow -row $_ -columnInfo $positions
        }
    }
    end {
    }
}
Export-ModuleMember -Function * -Alias *
# e.g. :
# docker --tls ps -a --no-trunc | ConvertFrom-Docker | ft

.\Docker-Utils.psm1 – this is a set of utility functions and aliases that make life a little easier.

docker-ps-ext # retrieve docker ps info with IP address

ContainerId : fbf1955f00a9
Image       : portainer/portainer
Created     : About an hour ago
Status      : Up About an hour
Ports       : 0.0.0.0:9000->9000/tcp
Name        : portainer
IPAddress   : 172.22.232.8

Here is the code

# look up a container's IP address via docker inspect
function Docker-Get-IP() {
    param ([string] $containerId)
    return docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' $containerId
}

function Docker-List-IPAddress { 
  docker ps | ConvertFrom-Docker | % { 	
    $dc = New-Object PSObject -Property @{ ContainerId=""; Image=""; Created=""; Status=""; Ports=""; Name=""; IPAddress="" }
    
    $dc.ContainerId = $_.ContainerId
    $dc.Image = $_.Image
    $dc.Created = $_.Created
    $dc.Status = $_.Status
    $dc.Ports = $_.Ports
    $dc.Name = $_.Names
    $dc.IPAddress = (Docker-Get-IP $_.ContainerId)
    
    $dc | Select ContainerId, Image, Created, Status, Ports, Name, IPAddress
  }
}

# remove any stale alias before re-creating it
Remove-Item Alias:docker-ps-ext -ErrorAction SilentlyContinue
Set-Alias -Scope Global -Name docker-ps-ext -Value Docker-List-IPAddress



Quick computer setup using Chocolatey

Here is a quick script for setting up a new computer using Chocolatey

@"%SystemRoot%\System32\WindowsPowerShell\v1.0\powershell.exe" -NoProfile -InputFormat None -ExecutionPolicy Bypass -Command "iex ((New-Object System.Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))" && SET "PATH=%PATH%;%ALLUSERSPROFILE%\chocolatey\bin"

cinst -y 7zip
cinst -y notepadplusplus
cinst -y notepadreplacer
cinst -y javaruntime
cinst -y spacesniffer
cinst -y sysinternals
cinst -y firefox
cinst -y googlechrome
cinst -y vcredist2010 
cinst -y vcredist2012 
cinst -y vcredist2015 
cinst -y silverlight
cinst -y mremoteng
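
cinst is just an alias for choco install. Once everything is installed, keeping the machine up to date later is a single command from an elevated prompt:

choco upgrade all -y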



Ramblings on blockchain

Some thoughts on what could be done with blockchain

  • Supply Chain/Inventory Validation
    • Blockchain opens up the possibility of building a “trusted supply chain” that is not just dependent on partners reporting in a reactive manner. Through the use of blockchain it is now possible to validate the who, what, when, where, and how many in a way that cannot be forged. Consider the idea of recording entry/exit of a “group of items” along with their unique identifiers in a blockchain ledger. The idea being that there is now an immutable record that traces things down to the item level and can be validated at any time along the way without relying on specific partners’ data systems. This could have a huge impact on fraud, product loss, “accidental redirection” of inventory, etc.
  • Embedded Software Validation
    • Consider the recent IoT attack on Dyn DNS that took out DNS on the east coast for a few hours. This could have been prevented if there were a way for embedded systems to validate not only that individual components (dlls, code, etc) are valid but also that the integrity of the entire system state is intact (not compromised). This could be done by setting up a blockchain that stores enough information for an embedded system to validate that all components are in a state that the provider considers “valid” and “up to date”.
  • Intellectual Collaboration
    • Today more and more companies are working together to bring new products to market and are seeing a lot of value from these partnerships. The hard part is keeping track of who contributed what and who had the first real substantive idea that led to a product/solution. Blockchain could be used to set up an “ideation and collaboration” ledger system that would in turn establish a “trusted system of record” for the entities involved in the collaboration.
      Note: This could just as easily be used internally.
  • Licensing/Royalties
    • Blockchain opens up the potential for a self-enforcing contract that can be validated in real time in an automated way. Consider the music industry (or any industry that licenses something, for that matter). Today these contracts are managed by intermediaries, manually maintained, and “audited” for compliance and payment. With blockchain you have the ability to record not only a “purchase” but also the “terms” associated with the purchase. This is then something that can be validated at any point in time, allowing systems to be designed to easily and cost-effectively report usage (and potentially track incorrect usage through thumbprints).

At the end of the day, most of these ideas come down to leveraging blockchain to manage “contracts” in a way that can be automated and validated in real time, in a standard and cost-effective manner.