Super functions are delicious, just you let me show ya

With the advent of the cloud function, it looks like we're heading towards a new era in web development, where the front end becomes totally detached, served by a serverless back-end.

This has its positive aspects and its drawbacks, which I'm not going to get into here. Instead, I'm going to take a quick(ish) sprint around the current offerings from the various companies out there.

The plan

Our scenario is a super connected industry influencer, just like, mooching around the top art popups and bubble tea establishments. Time is precious when you’re prowling around influencing things, so instead of writing a blog post when something catches my eye (boooooring!), I just want to send an SMS with the latest hash-fashion and have a blog post immediately appear on the now hip and trendsome Blogger platform (Kanye uses it).

Across the street, however, is an evil Ad Conglomerate, let's just call them InterOmnitsu. And they see your influencing, and although BoingBoing is pretty great at keeping their content fresh, they need some of your youthful energy at that next pitch for water-based caffeinated wet wipes. They want to watch your blog posts, and get a copy of them as soon as you post one.

It's an arms race, rivalling the Cuban Shoe Crisis of 1967.


So, to fulfill our scenario, we will be SMSing into Twilio, which will send the message to an Azure Function, which will then query Twitter… Our Azure Function will take the results of the Twitter query and send them to Google Cloud Functions, which will in turn take the content, format it for a post to Blogger, and send a message to LinkedIn notifying my followers of my newest musings.

On the dark side, an Amazon service will monitor the Blogger page and, when a new post is detected, take a screen grab, save it to cloud storage, then send the screenshot to an email account.

The yoof marketing industry is surely a den of iniquity and vice…

To fulfill this task, we will need to set up the following (*cough* this may change):

Microsoft Azure Account

Twilio Demo Account

Twitter Application Account

Google Account

Blogger Account

LinkedIn Account

Amazon AWS account

All the above have 30 day/demo credit offers, and as we’re micro-functioning the whole thing, even if your demo does run out, just create another.

Step One: Twilio to Azure

Note: you will have a lot of API keys and accounts to keep track of, so it's best to create a document and keep them safe.

Create an Azure demo account and log into the functions area ( ).

Create an account on Twilio ( )

Get a new number, and create a programmable SMS. Point the Request URL at your Azure Function URL and set the dropdown to POST.

SMS the number with a word, and you should see the request come into your function in your Azure log. That's a big POST!
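For reference, the interesting part of that POST is a handful of form fields, roughly like these (values made up, and abridged):

  Body=fidgetspinner
  From=+447700900123
  To=+447700900456
  MessageSid=SMxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx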

Add the following code to the Azure Function JavaScript, and we should be able to text the Twilio number and see only the requested word in the log.

module.exports = function(context, req) {
    // Twilio passes the SMS text through in the Body parameter
    var whatWasMessaged = req.query.Body;
    context.log('Search for a tweet for ' + whatWasMessaged);
    context.done();
};

Oh yeah, create a Twitter app and make a note of all the API keys..

Next, we start with the following code, building on it to connect to Twitter once the call from Twilio comes in (which will enter the function as an HTTP POST).

module.exports = function(context, req) {
    var tweets = getTweetsForWord(req.query.Body);
    sendToGoogle(tweets, context);
    context.done();
};

// dummy data, so the Twilio -> Azure plumbing can be tested first
function getTweetsForWord(nam) {
    return { tweets: [{ message: "HAM:" + nam + "HAM:", from: "neilhighley" },
                      { message: "JAM:" + nam + "JAM:", from: "cooldude" }] };
}

// stub: just log for now
function sendToGoogle(pak, context) {
    for (var i = 0; i < pak.tweets.length; i++) {
        context.log(pak.tweets[i].message + " from " + pak.tweets[i].from);
    }
}

I’ve just created dummy functions so that I can test the connection from Twilio to my function URL and get the meat of the app done as soon as possible.


Create a Twitter App, and note down the API and consumer keys.

We need to use the Twitter API, so we have to have the client package installed via npm.

Open up the function app settings and navigate to the App Service Editor.


On the left, select the console, so we can install packages, and install the Twitter package.
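At the console prompt that's just a standard npm install (the client library used here is the twitter package on npm):

  npm install twitter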


Add the following to your function and run it.

var Twitter = require('twitter');
var client = new Twitter({
    consumer_key: 'xxxxxxxxxx',
    consumer_secret: 'xxxxxxxxxx',
    access_token_key: 'xxxxxxxxxx',
    access_token_secret: 'xxxxxxxxxx'
});

Add a "console.log(client);" to the first function (module.exports), then observe the output in the monitor section on the left of the App Service Editor. You should see a huge JSON object for the Twitter client. Otherwise, check the other log next to the function code, which should show the error coming back from Twitter.

Now that we have a connection to Twitter, we can wire our Azure Function up to Twilio so that an incoming SMS triggers a query to the Twitter API.

var Twitter = require('twitter');

var client = new Twitter({
    consumer_key: 'xxxxxxxxxx',
    consumer_secret: 'xxxxxxxxxx',
    access_token_key: 'xxxxxxxxxx',
    access_token_secret: 'xxxxxxxxxx'
});

var tweet_count = 3;

module.exports = function(context, req) {
    getTweetsForWord(req.query.Body, context);
};

function getTweetsForWord(nam, context) {
    // client.get is asynchronous, so the results are handled in the callback
    client.get('statuses/user_timeline',
        { screen_name: 'donaldtrump', count: tweet_count }, // hardcoded account while testing
        function(error, tweets, response) {
            if (!error) {
                var tweetsReceived = [];
                for (var i = 0; i < tweets.length; i++) {
                    var thisTweet = tweets[i];
                    tweetsReceived.push({ tweet_text: thisTweet.text, id: thisTweet.id_str });
                }
                sendToGoogle({ tweets: tweetsReceived }, context);
            } else {
                context.log('Twitter error: ' + JSON.stringify(error));
            }
            context.done();
        });
}

function sendToGoogle(pak, context) {
    // just gonna test for now
    for (var i = 0; i < pak.tweets.length; i++) {
        context.log(pak.tweets[i].tweet_text + " from " + pak.tweets[i].id);
    }
}

Now we have Twilio sending a POST to Azure Functions, which calls Twitter and formats a JSON object ready for sending to Google/Blogger…
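For the curious, the sending side of sendToGoogle will eventually look something like this minimal sketch, which uses Node's built-in https module to POST the JSON package to a Google Cloud Function trigger URL (the hostname and path below are placeholders, not a real endpoint):

function sendToGoogle(pak, context) {
    var https = require('https');
    var body = JSON.stringify(pak);
    var options = {
        hostname: 'us-central1-myproject.cloudfunctions.net', // placeholder project
        path: '/postToBlogger',                               // placeholder function name
        method: 'POST',
        headers: { 'Content-Type': 'application/json',
                   'Content-Length': Buffer.byteLength(body) }
    };
    var request = https.request(options, function(res) {
        context.log('Google responded with status ' + res.statusCode);
        context.done();
    });
    request.on('error', function(err) {
        context.log('Send failed: ' + err);
        context.done();
    });
    request.write(body);
    request.end();
}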

next time.. hopefully…

ASP.Net Core on Cloud9

This is going to be the first of a series of .NET Core posts, seeing as it's started to stabilise with all the naming things and such..

With Cloud9IDE being purchased recently by Amazon, it seems that Amazon may have great, Heroku-flavoured plans. These plans will probably revolve around micro-site / micro-service development, which is something .NET Core hopes to be at the vanguard of.

Installing .NET Core on Cloud9 is simply a matter of installing it on Ubuntu 14, which at the time of writing is the current LTS version of Ubuntu. When Cloud9 moves to Ubuntu 16, I can't imagine the process outlined below being any different, apart from the repo change.

Firstly, log into Cloud9, and create a new Ubuntu Workspace.

At the command prompt, update the apt library;

  sudo apt-get update

then add the capability for apt to get resources from https.

  sudo apt-get install apt-transport-https

Now, follow the instructions on the .net core page ( )

Note: The instructions that follow are correct at time of writing, please check with the official page also.

  sudo sh -c 'echo "deb [arch=amd64] trusty main" > /etc/apt/sources.list.d/dotnetdev.list'

  sudo apt-key adv --keyserver --recv-keys 417A0893

  sudo apt-get update

  sudo apt-get install dotnet-dev-1.0.0-preview2-003121

Once installed, create a new folder, create a new app, restore the packages and run it.


  mkdir helloworld

  cd helloworld

  dotnet new

  dotnet restore

  dotnet run

.NET Core is a leap forward into the unknown for Microsoft, who hope to get the jump on the incoming micro-web technologies. They have a steep hill to climb to catch the Node ecosystem, though that ecosystem has weaknesses of its own. So, good luck Redmond. 🙂

Using Visual Studio Code to develop cross platform mobile applications using Apache Cordova

Well, the Microsoft open-source love-in continues with the latest release from the tools team: TACO (Tools for Apache Cordova) for Visual Studio Code. What this means is that you can now create mobile apps for Android and Windows (and iOS) using Visual Studio Code.

There are certain prerequisites, of course: having NodeJS installed, then installing Cordova globally.
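If Cordova isn't installed yet, it's one command via npm:

  npm install -g cordova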

If you haven’t installed Visual Studio, you may need to install the Windows Phone SDK.

If you want to support iOS, you'll need an OS X machine; whether it's a MacBook, MacBook Air, or just a Mac Mini doesn't really matter.

If you want to build for Android, ensure you have the Android SDK installed on the same computer you will be developing on. If you want to build for Windows Phone, well done, you're doing the world a solid #breaktheapplefanboymonopolybydevelopingonawindowsphoneforfunandprofit.

Having said all that, go ahead and create a new folder for your project. Let’s just call it something relevant like “applesux”

Open up Visual Studio Code, and open up your folder.

Next, open the Command Palette via the menu, View > Command Palette, or by pressing Ctrl+Shift+P, then type "install" and select "Extensions: Install Extensions". When the extension list appears, type "Cordova" and you should see the Cordova Tools by Visual Studio Mobile Tools.

Install the Cordova Extension

Select Cordova Extensions

Restart Visual Studio Code when prompted, then develop a mobile app.

Well, have fun and take care!

Oh, how funny am I!

Anyway, as long as Cordova and NodeJS were installed earlier, you’re ready to add your platforms.

Let’s start with the main platform, Windows Phone.

Open up a terminal console in the same “applesux” folder, and add a sample app for cordova by entering

cordova create . com.applesux.hello HelloWorld

You should see the Cordova files in the folder, so now it's time to pick the intelligent platform of choice, Windows 10 8 :\

Cordova platform add wp8

You should now be able to build to the windows phone!

To do that, you need to first set up the environment, by clicking on the debug icon on the left, then clicking the cog to get to the settings. Select the Cordova environment and make a cup of tea.

Having drunk the tea, maybe have a biscuit and another cup of tea. You'll need the bathroom now, so don't forget to wash those hands.

Rehydrated and calm, we can get on with building for Windows Phone. You'll notice that the Cordova debug configuration currently (2016-01) has no support for debugging on Windows Phone (bet you're glad you had that nice cup of tea); for that you'll need Visual Studio (Community will do).

No support for windows 10 debug, yet

Keep an eye on the mobile tools blog for Windows Phone support appearing. I'm guessing they're readying a Windows Phone 10 package for Cordova and will release everything alongside that.

Instead, let's just "totes ship it (citation: needed)" as they say around the Silicon Roundabout.

Go back to the command prompt which you opened in your project folder, and type

platforms/wp8/cordova/run --device --release

You will now have a .xap package (zap) in the /platforms/wp8/bin/release folder, which you can deploy to your development phone.

Search Windows for the "Windows Phone Application Deployment" tool by typing it into the Windows 10 search box.

Windows Phone deployment

Browse to the release folder and select your xap. Make sure your dev phone is connected and click “deploy”.

Enjoy the fruits of your labour.

The experience of building Android and iOS apps is a bit better, as building and debugging can be done through Visual Studio Code, but Windows 10 support won't be that long coming, and I'll update this post accordingly.

Until then, take care.




Setup SSH on a home linux server for remote Node development

Hello again, today I'm going to run through what's required to get a node server running from home.

This may seem like an odd thing to do, but if you do a lot of remote work/hackathons/contract work you may find that the facilities to perform an internet-accessible demo are quite lacking.

Firstly, we take our old laptop/micro PC/old PC and install the latest version of Ubuntu (15.10 at time of writing). However, we don't need the desktop experience, so we'll just do the server installation. You'll need to do this in front of the machine (although it is possible to roll an SSH-enabled distro, but that is far from Quick 😉 ).

After installing Ubuntu and setting a static IP, log in and install OpenSSH.

Ensure that you follow the instructions in the link below, and alter the listening port to something other than 22 (e.g. 36622).
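In short (though do check the official guide), it's something along these lines, editing sshd_config to change the port:

  sudo apt-get install openssh-server
  sudo nano /etc/ssh/sshd_config     # change "Port 22" to "Port 36622"
  sudo service ssh restart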

So, now you should be able to access your ssh prompt via a local loopback:

ssh -v localhost

Let's add node and a simple HTTP server application

sudo apt-get install nodejs npm

(Note: on Ubuntu the apt package is nodejs, not node; if the node command is missing afterwards, sudo apt-get install nodejs-legacy restores the traditional name.)

Once node is installed, create a folder for your server

mkdir nodetest

Then browse to your new folder and initialise node

cd nodetest
npm init

Now add the http module (strictly speaking, http is built into Node, so this install is optional):

npm install http --save

(as ever, use sudo if none of this works or chmod/chown your folder)

And add the following code to a new javascript file called quickanddirty.js to create a simple http listener on port 8090

var http = require('http');

// respond to every request with a plain greeting
var server = http.createServer(function(req, resp){
    resp.end("Welcome to your Node server");
});

server.listen(8090, function(){
    console.log("Your server has started", 8090);
});

Test your server out by running node with the javascript file

node quickanddirty.js

You will see that the server has started, and is listening to port 8090. Leave it on as we move to accessing the box remotely.

Note: you can use cURL to check the response also if you are feeling unstoppable 😉
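From a second terminal on the box, something like:

  curl http://localhost:8090

should print the welcome message.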

So, to recap, we have an Ubuntu linux box running openSSH and Node. Happy times, happy times.

At this point, as we already assume you have a home broadband connection, we will connect the box to the outside world.

As broadband supplier software differs I’ll try and explain what you need to do both on and away from the box.

Firstly, you need a way of mapping the often-shifting IP address of your router to a static DNS entry. This is done using a dynamic DNS service such as DynDNS (there are others available, but they will generally require installing Perl scripts on your linux box to keep the dynamic DNS entry up to date).

So, register an account with DynDNS (others are available) and choose a subdomain. Note: Don't make the name identifiable to yourself.. let's not give hackers an easy ride 😉

Once you have your subdomain, you need to create a mechanism to update the dynamic service so calls to the domain get passed to your router IP address.

Both the Sky and Virgin broadband devices have areas to select the Dynamic DNS service. Note: advanced users can configure the dynamic DNS update from the linux box.

Once it is selected, you’ll enter your account details for the Dynamic DNS service and your router will periodically let DynDNS (or whoever) know the current IP address of your router. This allows you to ssh in on a domain and always get to your router.

Once the dynamic DNS is set up, you'll generally need to set up a port forward via the router's firewall, from the entry point of your router to the linux server's OpenSSH port number (as chosen previously), 36622.

With the Virgin router, you will need to buy another router and put your Virgin box into modem mode, which will simply pass the connection to your other router for dynamic DNS, port forwarding and firewall setup. The full instructions for doing this can be found online: search for "virgin wifi modem mode dynamic dns".

The Sky router is more friendly, with services to set up the port to listen to, then firewall settings to point it to your box.

As I said previously, you don’t need to use DynDNS through the broadband box, just ensure that the port is available and you have a method of updating the Dynamic DNS entry in your provider with your router IP.

The clevererer of you reading will have realised that you don’t need dynamic dns at all if you know the current IP of your router, so as a last resort, you can use that to connect to SSH.

Which leads us to, connecting to your server.

With your server running, hop onto another network, such as your phone's, using a different computer, and try to connect to your SSH server.

In terminal, type the following, taking "nodeuser" as the user created on your linux box, and "" as the dynamic DNS entry (you could use the router IP instead), with the port number of 36622 we chose earlier:

ssh -p 36622

You should be able to log in to your server. Verify by browsing to your nodetest folder.

So, we can access your server via OpenSSH, but how can we access the node instance running at 8090? Simples. We tunnel to it.

Type "exit" to exit from the OpenSSH session, then create a new session with tunneling added. To explain how tunneling works in one easy sample, I am going to tunnel into port 8090 on my SSH connection via a local port of 9999 (the -L argument reads local_port:destination_host:destination_port).

ssh -p 36622 -L -N

or, if that seems not to work correctly, replace the second dynamic domain with your server's actual name.

ssh -p 36622 -L 9999:randomchicken47svr:8090 -N

Now you’ll be able to browse to the localhost port of 9999 in a web browser, and see the response from your Node server via tunneling.

We have used tunneling instead of just opening a port direct to your node port as it increases security. Opening ports for multiple services increases your attack surface, meaning that an attacker has more things to attack to gain access to your network. It's much safer to have a single fortified SSH access point on a non-standard port.

Be careful, you may get addicted to SSH tunneling, as it can enable you to do some amazing things.. But bear in mind, the tunnel uses your home bandwidth allowance if you have one.

Take care,


Entity Framework – When to use what

In Entity Framework there are three ways of utilising the ORM: one enables you to get straight into coding (code-first), one enables you to rip a database into a new model (database-first), and the final one enables you to create a model independent of the database or your final code (model-first).

But, when should I use each of these methods?

Each method has its pros and cons. Personally, I don't really use code-first that often, as it lends itself to a build where everything has been fully architected beforehand and all you're doing is building to spec. That is something I rarely encounter, as initial green-field development is often a very agile process, especially if you're utilising a TDD/BDD development cycle.

So in what scenario would you legitimately use Code-First?

Say you have a very small micro-service to build, such as an auditing service, where you already know the database fields, and possibly the service will only know its connection at runtime. Code-First is an ideal solution, as it enables you to quickly knock out the code, leaving the spinning up and implementation of the database to the EF settings in config. The main drawback I find with Code-First is that if your database schema is not set in stone, a rebuild of your EF model will necessitate a destruction of the database. You can create a custom upgrade path, but this is rarely done. So, if you have an unchanging model, for a small data footprint, code-first is great.

Code-First is also great for proof-of-concept builds that may be knocked out in a day to show a particular prospect of development.

Database-First is obviously good where your development is bound to a database schema which already exists and is quite complex. You can just use the EF designer in Visual Studio to generate the model and get up and running very quickly. A database schema change will mean that the EF model needs to be recreated, but it's generally no big deal, as the database will keep its data integrity, being developed in its own domain by DBAs or other colleagues.

Model-First would generally be used to map out a logical structure of the data which bridges both the system model and the database model. Say you wish to use a different paradigm of data design in the database from your model (a flat-file DB with a relational ORM). It could also be the case that you are given a data-design task where you need to develop a schema that satisfies the requirements of the database team and the architect, utilising a colourful database like Oracle or MySQL to fit.

I hope this helps you decide the approach to use when implementing Entity Framework in your work.

Take care

Moar Oculus hijinks

As part of the #hackcancer hackathon I ignored the urge to build another app, and instead dragged along my LAN-party PC and my Oculus and set about creating an interactive game to introduce a conversation about cancer to teens and young children.

The result is NanoDocs (nanodoctors). Below are a few links to the source code and to a video of the work. Enjoy.

The repo has a video with sound on it also. As soon as I have found a way to record the 2D output I'll repost it.


Take care


Using Gulp on Windows machines

Doing any sort of Javascript FullStack development on Windows seems to be quite niche, so you usually have to spin up a VM with Ubuntu (other distros are available) or remote into a Mac.

Well, it is perfectly possible to do Javascript development on Windows, even from the command line.

Although tools like WebStorm take a lot of the command-liney stuff away, if you're working with node you generally have to go there eventually.

In this brief overview I'm going to introduce the concept of a task runner, using gulp. A task runner is something that runs actions on your code, which in the FullStack world generally means concatenating and transpiling. In this example I'll show you how to concatenate CSS style files into a single stylesheet file.

So lets get on getting on!

Firstly, install NodeJS.

Create a folder in which to host your app; I'll be referring to this as root.

Open up a command prompt and then browse to your new folder (or shift right click in the folder 😉 ).

Create a build folder and a src folder within root.

>mkdir build src

Now, initialise the node package manager

>npm init

This will create a packages file for your local node installation. Adding --save to any npm install will save the corresponding reference to the package.json; adding --save-dev will add it to your development dependencies. This allows you to have one set of dependencies you may need for running tests, and another for running the app online if needs be.
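To illustrate, after a few installs the two buckets in your package.json end up looking roughly like this (package names and versions here are only examples):

  {
    "name": "gulpdemo",
    "version": "1.0.0",
    "dependencies": { "express": "4.13.3" },
    "devDependencies": { "gulp": "3.9.0", "gulp-concat": "2.6.0" }
  }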

Now we create the file which will store the instructions for gulp.

>type NUL > gulpfile.js

Time we installed gulp proper (use -g if you want to install globally)

>npm install gulp --save-dev

At this point, only the packages we need should be installed, so as I am just smooshing up some style files, we only need gulp-concat

>npm install gulp-concat --save-dev

Now we need to open up the gulpfile and add the concat dependency.

We can use notepad for this (or install a command-line editor if you don't wish to leave the console window; Nano is great for this).

>notepad gulpfile.js

Add the following line;

var gulp=require('gulp'), concat=require('gulp-concat');

Now we add the gulp task to concatenate the style files.. but we have no files to concat.

So, rustle up three css files called style1.css, style2.css and style3.css with some css in there, and place them in the src folder.

We can create the gulp task to concat them.

Open up the gulpfile.js file in notepad again.

Add the following;

gulp.task('styles', function(){
  return gulp.src('src/*.css')
    .pipe(concat('style.css'))
    .pipe(gulp.dest('build'));
});

This will set the source for the task to be all the css files in the src folder, pipe them all to concat into style.css then finally move them to their destination in the build folder.

And finally, to run gulp, just call gulp with the task name.

>gulp styles

If you get a not found error at this point, you may need to install gulp and gulp-concat as global (-g).

You’ll now find a concatenated styles file in the build folder called style.css.

Let’s go deeper.. And without any water wings.

Say I want to use LESS rather than css.

I alter my gulpfile to add another task, called style-less

gulp.task('style-less', function(){
  return gulp.src('src/*.less')
    .pipe(less())
    .pipe(concat('style2.css'))
    .pipe(gulp.dest('build'));
});

But, we haven’t got the less plugin installed yet.. so it won’t work (chortle).

To get the less compiler to work we need to install the gulp less plugin, then add it to the dependencies in the gulpfile.js.
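Assuming the plugin we want is gulp-less, that's one install and one extra require at the top of gulpfile.js:

  npm install gulp-less --save-dev

var less = require('gulp-less');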

We also need some more style.less pages. So create three using a variation of the following (careful with the minus sign; ensure you have everything spaced out):

@redcolor: #884444; // base colour is just an example - any colour will do
@redcolor_dark: @redcolor - #222;
body { background-color: @redcolor_dark; }


Run gulp with the style-less task and you should get a style2.css with the compiled and concatenated less created CSS.

The same concept follows using a javascript transpiler.  Pipe the files to the transpiler and put them in the destination.
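As a sketch of that idea (assuming the gulp-babel plugin and the es2015 preset are installed via npm), a transpile-and-concat task would look roughly like this:

var babel = require('gulp-babel');

gulp.task('scripts', function(){
  // transpile ES2015 source down to ES5, then bundle into one file
  return gulp.src('src/*.js')
    .pipe(babel({ presets: ['es2015'] }))
    .pipe(concat('app.js'))
    .pipe(gulp.dest('build'));
});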

But wait! I hear you cry! I thought Gulp was automagical!?

Ok, it is, and since you insist, here is a brief overview of gulp watch.

You create a new task, call it "style-watcher", then add a gulp watch instruction inside the task.

gulp.task('style-watcher', function(){
  gulp.watch("src/*.less", ['style-less']);
});

Then run the style-watcher task, go and edit a stylesheet, and see the magic.

I’ll let you work out how to watch the normal stylesheet files.

And that is pretty much the fundamentals of Gulp. As you can see, you can nest tasks and run tasks within tasks. Gulp, alongside Grunt, is one of the cornerstones of fullstack, node-based development. Grunt runs things a little differently, however, namely in the complexity of the config and the number of plugins due to its maturity.
However, remember that all Grunt and Gulp do is run the actual modules, so technically you don't need either; you can do just the same with an npm build script. But that's for another day. 🙂

Take care.


Coming soon to your company?


Available from : September 2016

I'm going to update this post with my availability, so if you want to chat, give me a call. My resume and contact details are on the main page. Mention you saw this post and I'll call right back!

My current role involves mainly desktop development and Industrial Hardware controllers. It’s a little different from what I normally do, but it seemed a good fit for my current interests in IOT and they’re actively using TDD and BDD for their production pipeline.

Online and Offline Software Developer available for C#, HTML, CSS, JavaScript roles from the start of November. I have almost 20 years' commercial web experience and consider myself multistack, having worked across multiple industries in a lot of positions throughout software development and operations.

Blog address:

Github repo:

Main Site:


Daily Rate available on request.

ASP.NET 4 / 4.5 / C# / LINQ / jQuery / Javascript / REST / XML / Visual Studio 2013 / Resharper / Git / MSSQL / IIS8 / Windows Server / CSS3 / HTML / TDD / KnockoutJS / Angular 1.0 / Azure

PHP / WordPress / MySQL

Quick N Dirty – Oculus Rift 0.7 hijinks

Well, having obtained an Oculus in a hackathon the other month, I looked at how easy it is to start developing Unity games.

Pretty easy, thank the gods! (praise the sun)

So, here goes..

(Make sure your Nvidia drivers are up to date and supported: here )

Firstly, download and install the 0.7 SDK from here. Set up your Oculus and verify you have the blue light and can see the demo scene.

Then, install Unity 5+ from here, and download the “Oculus Utilities for Unity 5” from here.

Unzip the utilities and copy the package and project folder into the assets folder in Unity3D.

Start Unity and quickly (or slowly, whatever) create a plane and pop on a couple of cubes.


Now, go to the Asset menu, and import the Oculus Utilities Package.


Once it has finished, drag an OVR Player Controller prefab onto the scene and position it above your plane. Make sure the green outline is above the plane.



Now, build your scene using File > Build or by pressing Ctrl+B.

Note: Previously, the Unity integration kit used to build two files and you had to do some jiggery pokery with the monitors. Now, no jiggery pokery is required and it only builds one executable, but your Oculus knows when it's run.

So, with your headset on, double click your executable and navigate around with your game controller.

Have fun!


Adventures in IOT (Part 1 of n)

Well, I have been going a little IOT crazy for the last few months, and have been Hackathoning in pretty much all my spare time, so I'm going to do a few posts on some basic knowledge I have picked up in IOT, namely using Arduinos and Arduino clones on Windows and Mac.

Firstly, IOT on the Mac using Arduino-created boards is pretty simple, as the Mac doesn't have the USB driver layer that Windows has. However, using clones and 3rd-party chips like the ESP8266 on the Mac has proven to be a little bit of a challenge.

I bought a handful of UNO and Leonardo clones from xsource, and while they work perfectly fine on Windows with the standard sketches using bog-standard sensors, everything goes awry when the clones are used on the Mac.

This is because a vast majority of the clones use a WCH CH340 chip for comms, which the standard Arduino pack doesn't 100% support. Seems fine in Windows, however, so go figure. And as I am trying to use my Mac a bit more (thanks, web dev community), I started taking it to the hackathons, so had to get this issue sorted.

Unfortunately, the drivers from the manufacturer are not signed, so Yosemite doesn't trust them. So, you can either piss about with the linux subsystem or download them from someone on the interweb who has packed them into a trusted package. Thanks dude! However… they cost a small fee (€7.47), but it's worth it.

Also worth getting is “Serial” which has a ton of drivers cooked in for when you need to dick about with a serial connection. Get it from the app store.

So far, all my clones have worked OK (after installing the drivers, as above), even though I have been pretty random with my buying, just because they mostly come from China and I never know whether they will turn up or not. Generally, the boards have cost around €7 – €10 and have taken around two weeks to wing their way across the world. ESP8266s have cost around €3 – €5 and can be used to make wifi-enabled devices. I have hooked these bad boys up to Azure, WebAPI, Heroku (node) and a nice IOT cloud service that allows you to pipe data directly to a cloud endpoint. You can then create triggers based on the data it receives to get SMS alerts, emails, signals sent to other devices and a load of other pretty cool stuff, very easily and very quickly.

I’ll be putting some demos up, probably using IOT with azure in the coming days so stay tuned.

Catch you later