Category Archives: Development

Super functions are delicious, just you let me show ya

With the advent of the cloud function it looks like we’re heading towards a new era in web development, where the front end becomes totally detached and is served by a serverless back-end.

This has its positive aspects and its drawbacks, which I’m not going to get into here. Instead, I’m going to take a quick(ish) sprint around the current offerings from the various companies out there.

The plan

Our scenario is a super connected industry influencer, just like, mooching around the top art popups and bubble tea establishments. Time is precious when you’re prowling around influencing things, so instead of writing a blog post when something catches my eye (boooooring!), I just want to send an SMS with the latest hash-fashion and have a blog post immediately appear on the now hip and trendsome Blogger platform (Kanye uses it).

Across the street, however, is an evil ad conglomerate; let’s just call them InterOmnitsu. They see your influencing, and although BoingBoing is pretty great at keeping their content fresh, they need some of your youthful energy at that next pitch for water-based caffeinated wet wipes. They want to watch your blog posts, and get a copy of them as soon as you post one.

It’s an arms race, rivalling the Cuban Shoe Crisis of 1967.

ahem.

So, to fulfill our scenario, we will be SMSing into Twilio, which will send the message to an Azure Function, which will then query Twitter. Our Azure Function will take the results of the Twitter query and send them to Google Cloud Functions, which will in turn take the content, format it for a post to Blogger, and send a message to LinkedIn notifying my followers of my newest musings.
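To keep the moving parts straight, the happy path looks like this:

SMS -> Twilio -> Azure Function -> Twitter API
Azure Function -> Google Cloud Function -> Blogger post + LinkedIn notification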

On the dark side, an Amazon service will monitor the blogger page and when a new post is detected, take a screen grab, save it to cloud storage, then send the screenshot to an email account.

The yoof marketing industry is surely a den of iniquity and vice…

To fulfill this task, we will need to set up the following (*cough* this may change)

Microsoft Azure Account

Twilio Demo Account

Twitter Application Account

Google Account

Blogger Account

LinkedIn Account

Amazon AWS account

All the above have 30 day/demo credit offers, and as we’re micro-functioning the whole thing, even if your demo does run out, just create another.

Step One: Twilio to Azure

Note: you will have a lot of API keys and accounts to keep track of; it’s best to create a document and keep them safe.

Create an Azure demo account and log into the functions area ( https://portal.azure.com/#create/Microsoft.FunctionApp ).

Create an account on Twilio ( https://www.twilio.com/try-twilio )

Get a new number, and create a programmable SMS. Point the Request URL to your Azure Function URL and set the dropdown to POST.

SMS the number with a word, and you should see the request come into your function in your Azure log. That’s a big POST!
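For reference, Twilio passes the message details as named parameters (Body, From, To, MessageSid and friends), so the request arriving at your function will contain something roughly like this (the numbers and SID here are made up):

Body=balenciaga&From=%2B447700900123&To=%2B447700900456&MessageSid=SM0123456789abcdef0123456789abcdef

Depending on how the hookup is configured these arrive on the query string or in the request body; the code below reads them from the query.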

Add the following code to the Azure Function JavaScript, and we should be able to text a Twilio number and see only the requested word in the log.

module.exports = function(context, req) {
    var whatWasMessaged = req.query.Body;
    context.log('Search for a tweet for ' + whatWasMessaged);
    context.done();
};

Oh yeah, create a Twitter app and make a note of all the API keys.

Next, we start with the following code, building on it to connect to Twitter once the call from Twilio comes in (which will enter the function as a JSON POST).

module.exports = function(context, req) {
    var tweets = getTweetsForWord(req.query.Body);
    sendToGoogle(tweets, context);
    context.done();
};

function getTweetsForWord(nam) {
    return {tweets: [{message: "HAM:" + nam + "HAM:", from: "neilhighley"},
                     {message: "JAM:" + nam + "JAM:", from: "cooldude"}]};
}

function sendToGoogle(pak, context) {
    for (var i = 0; i < pak.tweets.length; i++) {
        context.log(pak.tweets[i].message + " from " + pak.tweets[i].from);
    }
}

I’ve just created dummy functions so that I can test the connection from Twilio to my function URL and get the meat of the app done as soon as possible.

 

Create a Twitter app, and note down the API and consumer keys.

We need to use the Twitter API, so we have to have the package installed via node.

Open up the function app settings and navigate to the App Service Editor.

[Screenshot: the App Service Editor in the Function App settings]

On the left, select the console, so we can install packages, and install the Twitter package.

[Screenshot: installing the twitter npm package from the console]
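If you’d rather type than click, the install from that console is just the usual npm one-liner (assuming the package name matches the client we require below):

npm install twitter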

Add the following to your function and run it.

var Twitter = require('twitter');
var client = new Twitter({
    consumer_key: 'xxxxxxxxxx',
    consumer_secret: 'xxxxxxxxxx',
    access_token_key: 'xxxxxxxxxx',
    access_token_secret: 'xxxxxxxxxx'
});

Add a “console.log(client);” to the first function (module.exports), then observe the log in the monitor section on the left of the App Service Editor. You should see a huge JSON object for the Twitter client. Otherwise, check the other log next to the function code, which should show the error coming from Twitter.

Now that we have a connection to Twitter, we can connect our Azure Function to Twilio so that an incoming SMS triggers a query to the Twitter API. Note that client.get is asynchronous, so the tweets are handed back through a callback rather than a return value.

var Twitter = require('twitter');

var client = new Twitter({
    consumer_key: 'xxxxxxxxxx',
    consumer_secret: 'xxxxxxxxxx',
    access_token_key: 'xxxxxxxxxx',
    access_token_secret: 'xxxxxxxxxx'
});

var tweet_count = 3;

module.exports = function(context, req) {
    // client.get is asynchronous, so we hand in a callback rather than
    // expecting the tweets back as a return value
    getTweetsForWord(req.query.Body, context, function(tweets) {
        sendToGoogle(tweets, context);
        context.done();
    });
};

function getTweetsForWord(nam, context, done) {
    var tweetsReceived = [];
    client.get('statuses/user_timeline', { screen_name: 'donaldtrump', count: tweet_count },
        function(error, tweets, response) {
            if (!error) {
                for (var i = 0; i < tweets.length; i++) {
                    var thisTweet = tweets[i];
                    tweetsReceived.push({
                        tweet_text: thisTweet.text,
                        date: thisTweet.created_at,
                        id: thisTweet.id
                    });
                }
            } else {
                context.log("error: " + error);
            }
            done({tweets: tweetsReceived});
        });
}

function sendToGoogle(pak, context) {
    //just gonna test for now
    for (var i = 0; i < pak.tweets.length; i++) {
        context.log(pak.tweets[i].tweet_text + " from " + pak.tweets[i].id);
    }
}

Now we have Twilio sending a POST to Azure Functions, which calls Twitter and formats a JSON object ready for sending to Google/Blogger…
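Based on the mapping in getTweetsForWord, the object we’ll eventually be posting on to the Google Cloud Function is shaped like this (the values here are just placeholders):

{
  "tweets": [
    { "tweet_text": "...", "date": "...", "id": 123456789 }
  ]
}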

next time.. hopefully…

Using Visual Studio Code to develop cross platform mobile applications using Apache Cordova

Well, the Microsoft OpenSourceSoftware love-in continues with the latest release from the tools team of TACO for Visual Studio Code. What this means is that you can now create mobile apps for Android and Windows (and iOS) using Visual Studio Code.

There are certain prerequisites of course: having NodeJS installed, then installing Cordova as a global package.
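Assuming NodeJS and npm are already in place, the global install is the usual one-liner:

npm install -g cordova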

If you haven’t installed Visual Studio, you may need to install the Windows Phone SDK.

If you want to support iOS, you’ll need an OSX machine; whether it is a MacBook, MacBook Air, or just a Mac Mini doesn’t really matter.

If you want to build for Android, ensure you have the Android SDK installed on the same computer you will be developing on. If you want to build for Windows Phone, well done, you’re doing the world a solid #breaktheapplefanboymonopolybydevelopingonawindowsphoneforfunandprofit.

Having said all that, go ahead and create a new folder for your project. Let’s just call it something relevant like “applesux”

Open up Visual Studio Code, and open up your folder.

Next, open the Command Palette via the menu (View > Command Palette) or by pressing Ctrl-Shift-P, then type “install” and select the extension install option. When the extension list appears, type “Cordova” and you should see the Cordova Tools by Visual Studio Mobile Tools.

Install the Cordova Extension

Select Cordova Extensions

Restart Visual Studio Code when prompted, then develop a mobile app.

Well, have fun and take care!

Oh, how funny am I!

Anyway, as long as Cordova and NodeJS were installed earlier, you’re ready to add your platforms.

Let’s start with the main platform, Windows Phone.
https://cordova.apache.org/docs/en/latest/guide/platforms/wp8/index.html

Open up a terminal console in the same “applesux” folder, and add a sample app for cordova by entering

cordova create . com.applesux.hello HelloWorld

You should see the Cordova files in the folder, so now it’s time to pick the intelligent platform of choice, Windows 10 8 :\ (8 for now; as we’ll see, Windows 10 isn’t supported yet).

cordova platform add wp8

You should now be able to build to the windows phone!

To do that, you need to first set up the environment, by clicking on the debug icon on the left, then clicking the cog to get to the settings. Select the Cordova environment and make a cup of tea.

Having drunk the tea, maybe have a biscuit and another cup of tea. You’ll need the bathroom now, so don’t forget to wash those hands.

Rehydrated and calm, we can get on to building for Windows Phone. You’ll notice that there is currently (2016-01) no support in the Cordova config to debug on Windows Phone (bet you’re glad you had that nice cup of tea); for that you’ll need Visual Studio (Community will do).

No support for windows 10 debug, yet

Keep an eye on the mobile tools blog for Windows Phone support appearing. I’m guessing they’re readying a Windows Phone 10 package for Cordova and will release everything alongside that.

Instead, let’s just “totes ship it (citation: needed)”, as they say around the Silicon Roundabout.

Go back to the command prompt which you opened in your project folder, and type

platforms/wp8/cordova run --device --release

You will now have a .xap package (zap) in the /platforms/wp8/bin/release folder, which you can deploy to your development phone.

Search Windows for the “Windows Phone Application Deployment” tool by typing it into the Windows 10 search box.

Windows Phone deployment

Browse to the release folder and select your xap. Make sure your dev phone is connected and click “deploy”.

Enjoy the fruits of your labour.

The experience for building Android and iOS apps is a bit better, as building and debugging can be done through Visual Studio Code; but Windows 10 support won’t be that long coming, and I’ll update this post accordingly.

Until then, take care.


Setup SSH on a home linux server for remote Node development

Hello again. Today I’m going to run through what’s required to get a Node server running from home.

This may seem like an odd thing to do, but if you do a lot of remote work/hackathons/contract work, you may find that the facilities to perform an internet-accessible demo are quite lacking.

Firstly, we take our old laptop/micro PC/old PC and install the latest version of Ubuntu (15.10 at time of writing). However, we don’t need the desktop experience, so we’ll just use the server installation. You’ll need to do this in front of the machine (it is possible to roll an SSH-enabled distro, but that is far from Quick 😉 ).

After installing Ubuntu and setting a static IP, log in and install OpenSSH.

Ensure that you follow the instructions in the link below, and alter the listening port to something other than 22 (e.g. 36622)

https://help.ubuntu.com/community/SSH/OpenSSH/Configuring

So, now you should be able to access your SSH prompt via a local loopback:

ssh -v localhost

Let’s add Node and a simple HTTP application

sudo apt-get install nodejs npm

(On Ubuntu the package is nodejs rather than node; if the node command is missing afterwards, the nodejs-legacy package provides it.)

Once node is installed, create a folder for your server

mkdir nodetest

Then browse to your new folder and initialise node

cd nodetest
npm init

Now add the http module

npm install http --save

(Strictly speaking, http is a core Node module, so this install is belt-and-braces.)

(as ever, use sudo if none of this works or chmod/chown your folder)

And add the following code to a new javascript file called quickanddirty.js to create a simple http listener on port 8090

var http = require('http');
var server = http.createServer(function(req,resp){
    resp.end("Welcome to your Node server");
});
server.listen(8090, function(){
    console.log("Your server has started", 8090);
});

Test your server out by running node with the javascript file

node quickanddirty.js

You will see that the server has started, and is listening to port 8090. Leave it on as we move to accessing the box remotely.

Note: you can use cURL to check the response also if you are feeling unstoppable 😉
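For the unstoppable, from another shell on the box:

curl http://localhost:8090

which should come back with “Welcome to your Node server”.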

So, to recap, we have an Ubuntu linux box running openSSH and Node. Happy times, happy times.

At this point, as we already assume you have a home broadband connection, we will connect the box to the outside world.

As broadband supplier software differs I’ll try and explain what you need to do both on and away from the box.

Firstly, you need a way of mapping the often-shifting IP address of your router to a static DNS entry. This is done using a dynamic DNS service such as DynDNS (there are others available, but they will generally require installing Perl scripts on your Linux box to keep the dynamic DNS entry up to date).

So, register an account with DynDNS (others are available) and choose a subdomain. Note: don’t make the name identifiable to yourself… let’s not give hackers an easy ride 😉

Once you have your subdomain, you need to create a mechanism to update the dynamic service so calls to the domain get passed to your router IP address.

Both the Sky and Virgin broadband devices have areas to select the dynamic DNS service. Note: advanced users can configure the dynamic DNS update from the Linux box.

Once it is selected, you’ll enter your account details for the dynamic DNS service, and your router will periodically let DynDNS (or whoever) know its current IP address. This allows you to SSH in on a domain and always get to your router.

Once the dynamic DNS is set up, you’ll generally need to set up a port forward via the router’s firewall, from the entry point of your router to the Linux server’s OpenSSH port number chosen previously, 36622.

With the Virgin router, you will need to buy another router and put your Virgin box into modem mode, which will simply pass the connection through to your other router for dynamic DNS, port forwarding and firewall setup. The full instructions can be found online by searching “virgin wifi modem mode dynamic dns”.

The Sky router is more friendly, with services to set up the port to listen to, then firewall settings to point it to your box.

As I said previously, you don’t need to do dynamic DNS through the broadband box; just ensure that the port is available and you have a method of updating the dynamic DNS entry at your provider with your router IP.

The clevererer of you reading will have realised that you don’t need dynamic dns at all if you know the current IP of your router, so as a last resort, you can use that to connect to SSH.

Which leads us to, connecting to your server.

With your server running, hop onto another network, such as your phone’s, using a different computer, and try to connect to your SSH server.

In the terminal, type the following, taking “nodeuser” as the user created on your Linux box, “randomchicken47.dyndns.org” as the dynamic DNS entry (you could use the router IP instead), and the port number of 36622 we chose earlier.

ssh nodeuser@randomchicken47.dyndns.org -p 36622

You should be able to log in to your server. Verify by browsing to your nodetest folder.

So, we can access your server via OpenSSH, but how can we access the Node instance running on port 8090? Simples. We tunnel to it.

Type “exit” to leave the OpenSSH session, then create a new session with added tunneling. To explain how tunneling works in one easy example, I am going to tunnel to port 8090 over my SSH connection via a local port of 9999.

ssh nodeuser@randomchicken47.dyndns.org -p 36622 -L 9999:randomchicken47.dyndns.org:8090 -N

or, if that doesn’t seem to work correctly, replace the second dynamic domain with your server’s actual hostname (the destination is resolved at the server end, so localhost works too).

ssh nodeuser@randomchicken47.dyndns.org -p 36622 -L 9999:randomchicken47svr:8090 -N

Now you’ll be able to browse to localhost port 9999 in a web browser, and see the response from your Node server via the tunnel.

We have used tunneling instead of just opening a port directly to your Node port because it increases security. Opening ports for multiple services increases your attack surface, meaning an attacker has more things to probe to gain access to your network. It’s much safer to have a single fortified SSH access point on a non-standard port.

Be careful, you may get addicted to SSH tunneling, as it can enable you to do some amazing things.. But bear in mind, the tunnel uses your home bandwidth allowance if you have one.

Take care,

Neil

Entity Framework – When to use what

In Entity Framework there are three ways of utilising the ORM: one enables you to get straight into coding (code-first), one enables you to rip an existing database into a new model (database-first), and the final one enables you to create a model independent of the database or your final code (model-first).

But, when should I use each of these methods?

Each method has its pros and cons. Personally, I don’t use code-first that often, as it lends itself to a build where everything has been fully architected beforehand and all you’re doing is building to spec; something I rarely encounter, as initial green-field development is often a very agile process, especially if you’re utilising a TDD/BDD development cycle.

So in what scenario would you legitimately use Code-First?

Say you have a very small micro-service to build, such as an auditing service; you already know the database fields, and possibly the service will only know its connection at runtime. Code-First is an ideal solution, as it enables you to quickly knock out the code, leaving the spinning up and implementation of the database to the EF settings in config. The main drawback I find with Code-First is that if your database schema is not set in stone, a rebuild of your EF model will necessitate destruction of the database. You can create a custom upgrade path, but this is rarely done. So, if you have an unchanging model with a small data footprint, code-first is great.

Code-First is also great for proof-of-concept builds that may be knocked out in a day to show a particular prospect of development.

Database-First is obviously good where your development is bound to a database schema which already exists and is quite complex. You can just use the EF designer in Visual Studio to generate the model and get up and running very quickly. A database schema change will mean that the EF model needs to be recreated, but it’s generally no big deal, as the database will keep its data integrity due to it being developed in its own domain by DBAs or other colleagues.

Model-First would generally be used to map out a logical structure of the data which bridges both the system model and the database model. Say you wish to use a different paradigm of data design in the database from your model (a flat-file DB with a relational ORM). It could also be that you are given a data-design task where you need to develop a schema that satisfies the requirements of the database team and the architect, utilising a colourful database like Oracle or MySQL to fit.

I hope this helps you decide the approach to use when implementing Entity Framework in your work.

Take care

Coming soon to your company?

 

Available from : September 2016

I’m going to update this post with my availability, so if you want to chat, give me a call. My resume and contact details are on the main page at www.neilhighley.com. Mention you saw this post and I’ll call right back!

My current role involves mainly desktop development and Industrial Hardware controllers. It’s a little different from what I normally do, but it seemed a good fit for my current interests in IOT and they’re actively using TDD and BDD for their production pipeline.

Online and offline software developer available for C#, HTML, CSS, JavaScript roles from the start of November. I have almost 20 years’ commercial web experience and consider myself multi-stack, having worked across multiple industries in a lot of positions throughout software development and operations.

Blog address: http://blog.neilhighley.com

Github repo: http://github.com/neilhighley

Main Site: http://www.neilhighley.com

Devpost: http://devpost.com/neilhighley

Daily Rate available on request.

ASP.NET 4 / 4.5 / C# / LINQ / jQuery / Javascript / REST / XML / Visual Studio 2013 / Resharper / Git / MSSQL / IIS8 / Windows Server / CSS3 / HTML / TDD / KnockoutJS / Angular 1.0 / Azure

PHP / WordPress / MySQL

Adventures in IOT (Part 1 of n)

Well, I have been going a little IOT crazy for the last few months, and have been hackathoning in pretty much all my spare time, so I’m going to do a few posts on some basic knowledge I have picked up in IOT, namely using Arduinos and Arduino clones on Windows and Mac.

Firstly, IOT on the Mac using Arduino-created boards is pretty simple, as the Mac doesn’t have the USB driver layer that Windows has. However, using clones and third-party chips like the ESP8266 on the Mac has proven to be a little bit of a challenge.

I bought a handful of UNO and Leonardo clones from xsource, and while they work perfectly fine with standard sketches and bog-standard sensors on Windows, everything goes awry on the Mac.

This is because the vast majority of the clones use a WCH CH340 chip for comms, which the standard Arduino package doesn’t 100% support. Seems fine in Windows, however, so go figure. And as I am trying to use my Mac a bit more (thanks, web dev community), I started taking it to the hackathons, so had to get this issue sorted.

Unfortunately, the drivers from the manufacturer are not signed, so Yosemite doesn’t trust them. So, you can either piss about with the Linux subsystem or download them from someone on the interweb who has packed them into a trusted package. Thanks dude! However… they cost a small fee (€7.47), but it’s worth it.

https://www.mac-usb-serial.com/dashboard/

Also worth getting is “Serial”, which has a ton of drivers cooked in, for when you need to dick about with a serial connection. Get it from the App Store.

https://itunes.apple.com/gb/app/serial/id877615577?mt=12

So far, all my clones have worked OK (after installing the drivers, as above), even though I have been pretty random with my buying, just because they mostly come from China and I never know whether they will turn up or not. Generally, the boards have cost around €7 – €10 and have taken around two weeks to wing their way across the world. ESP8266s have cost around €3 – €5 and can be used to make wifi-enabled devices. I have hooked these bad boys up to Azure, WebAPI, Heroku (node) and a nice IOT cloud service at http://www.smartliving.io, which allows you to pipe data directly to a cloud endpoint. You can then create triggers based on the data it receives to get SMS alerts, emails, signals sent to other devices and a load of other pretty cool stuff, very easily and very quickly.

I’ll be putting some demos up, probably using IOT with azure in the coming days so stay tuned.

Catch you later

Neil

 

Using Flash CC (13) as an asset pipeline when developing with CreateJS

Anyone who knows my history will know that one of the strings to my bow was Flash development. The browser wars (part 1) made me tire of the hoops, loops and misinformation that used to come out of all the browser manufacturers’ camps. The graphical splendour of Flash in web applications lit the fire of the internet and dragged it away from tables and lists to interactive entertainment.

We all know what happened next, and you are welcome to whatever theory fills your proverbial bucket. Needless to say, I hadn’t touched Flash for a few years, with it being persona non grata in the burgeoning mobile space, and by extension the chugging-along web space.

I had used it in the past to export sprite sheets and the like, using it in the same fashion as it was originally designed, as an animation tool, but the HTML5 support was sketchy, as Adobe had nothing to replace the Flash engine with.

It seems Adobe is now in cahoots with interactive industry veteran Grant Skinner to embed createJS, his management wrapper over Canvas, the DOM and WebGL, into Flash by default, removing all the messy Cartesian maths, trigonometry and graphical hoopla in much the same way jQuery changed the DOM for JavaScript developers.

In this mini tutorial I’m going to show a simple pipeline from asset to createJS. This isn’t meant as anything but an example, and not a template for project work, so use at your own risk.

The tools I will be using are Flash CC (flash 13), a web browser, and a text editor. I’ll show how to create an asset in Flash, publish it to createJS, and then how to interact with it indirectly.

In Flash, create a new project using the HTML5 Canvas template, with a width of 800 and a height of 600. This is just so we have enough space; it doesn’t relate to the size of our finished piece.
Leave the frame rate at 24fps. Note: make sure you keep your frame rate noted, or planned out beforehand, as we will use it with createJS. Mismatched frame rates can cause animations to run at a different speed than expected.

[Screenshot: new Flash document settings]

I’m going to draw a simple sprite, which has several animations. In this case, a balloon.

Firstly, I create a blank movieclip called balloonClip in the library. Then I add some animations and keyframes with labels.

I’m going to give it a looping 24 frame floating animation and a 10 frame pop. I add labels to the animation which we will be using in createJS.

[Screenshot: balloon animation timeline with labels]

The balloonClip object will be made available in the createjs script.

Now, we create a simple webpage to hold the clip.

<!DOCTYPE html>
<html>
<body onload="createJSExample();">
<canvas id="createjsexample"></canvas>
<script>
   function createJSExample() {
       var canvas = document.getElementById("createjsexample");
       var stage = new createjs.Stage(canvas);
       createjs.Ticker.setFPS(24);
       createjs.Ticker.addEventListener("tick", drawLoop);
       function drawLoop() {
           stage.update();
       }
   }
</script>
<script src="https://code.createjs.com/createjs-2014.12.12.min.js"></script>
</body>
</html>

This is a very simple createJS template, with a drawLoop to handle stage updates. Every game or interactive piece has some sort of regular drawLoop or interrupt, which handles state changes in objects, such as a bullet, or cloud, and collision detection, etc.

Now we do a customary run of the page to make sure we have no typos. You can add a console.log to the drawLoop if you want to see it being hit by the tick event.

Back to flash!

We should have an empty screen with our balloon in the library.

Animations in flash will always loop, so we will need to put in redirects in the label layer to ensure the balloon doesn’t just run through to pop.

Add a gotoAndPlay(“float_start”) to the end frame of your floating animation. I’m not going to get into flash development here, I’m going to assume you know what I am talking about when it comes to using flash..
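Frame scripts in an HTML5 Canvas document are written in JavaScript, so the redirect on the final floating frame should look something like this (a sketch; the label name matches the ones set up earlier):

this.gotoAndPlay("float_start");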

That script will tell flash to go back to the start of the animation and keep looping. We can test it by doing a quick publish.

Flash will convert your animation into createJS and save it in the same folder as your FLA file.

If your animation wasn’t dragged onto the main scene prior to publishing, the corresponding JavaScript file will be fairly empty. If it was on the scene, then your code will contain lines similar to this.

// Layer 2
 this.shape = new cjs.Shape();
 this.shape.graphics.f("#FFCC00").s().p("AiTCiQg9hDgBhfQABheA9hDQA+hDBVgBQBXABA9BDQA+BDAABeQAABfg+BDQg9BDhXAAQhVAAg+hDg");
 this.shape.setTransform(-18,-155);

 this.shape_1 = new cjs.Shape();
 this.shape_1.graphics.f("#FFCC00").s().p("AiTCiQg+hDAAhfQAAheA+hEQA9hCBWAAQBWAAA+BCQA9BEABBeQgBBfg9BDQg+BEhWAAQhWAAg9hEg");
 this.shape_1.setTransform(-19.2,-154.3);

The numbers and letters are simply compressed representations of your movieclip’s vectors.

Let’s take the JS file created and place it next to the web page created earlier, then reference it with a script tag.

Now let’s add the code to show the balloon to our webpage.

var balloon = new lib.balloonClip();
stage.addChild(balloon);

The name in the lib object will be the same as what you called your clip in the library in flash.

Add the clip, and the createJS movieclip script to the HTML, so we can see the balloon in all its marvellous glory! 🙂

<!DOCTYPE html>
<html>
<body onload="createJSExample();">
<canvas id="createjsexample"></canvas>
<script>
   function createJSExample() {
       var canvas = document.getElementById("createjsexample");
       canvas.width=800;
       canvas.height=600;
       var stage = new createjs.Stage(canvas);

       var balloon=new lib.balloonClip();
       balloon.x=200;
       balloon.y=250;
       stage.addChild(balloon);

       createjs.Ticker.setFPS(24);
       createjs.Ticker.addEventListener("tick", drawLoop);


       function drawLoop() {
           stage.update();
       }
   }
</script>
<script src="https://code.createjs.com/createjs-2014.12.12.min.js"></script>
<script src="http://code.createjs.com/movieclip-0.7.0.min.js"></script>
<script src="balloonAnimation.js"></script>
</body>
</html>

If all went to plan, you should see your balloon bouncing around on the screen, rendered in canvas by createJS.

Now, we can add a button to pop the balloon.

Add the following code to the web page, just under where you declare the x and y position of the balloon.

var circle = new createjs.Shape();
circle.graphics.beginFill("red").drawCircle(0, 0, 10);
circle.x = circle.y = 10;
circle.name = "circle";
stage.addChild(circle);
//pop you!
circle.on("click",mouseEventHandler);

Now add the listener, under the drawLoop function. This will check to see what frame the movieclip is on, and act accordingly.

function mouseEventHandler(evt){
    if(evt.type=="click"){
        if(balloon.getCurrentLabel()=="pop_end"){
            balloon.gotoAndPlay("float_start");
        }else {
            balloon.gotoAndPlay("pop");
        }
    }
}

If all went well, you should be able to pop, and recreate the balloon using the red dot.

It’s great to see Grant Skinner flying the flag for flash, and taking the best features of Actionscript to create a much better transition to Javascript, and the support for WebGL will only add to the fun times ahead.

Javascript is still nowhere near as fast as flash, but we’re getting there! 🙂

Code is available on my git:

https://github.com/neilhighley/nh-examplecreatejs


Retrieving records from Apache Cassandra using NodeJS v0.10.35 and ExpressJS v4.0 via a REST interface

The adventures in Cassandra continue. Following on from the last post, I’m going to show how to set up a REST interface to retrieve records from Apache Cassandra via NodeJS.

I’ll only set up the GET for a list of items and an individual item; the PUT, POST and DELETE actions I’ll leave to you.

NodeJS is a server-based runtime environment for JavaScript which uses Google’s V8 JavaScript engine. It has proven to be an extremely fast way of pushing content and has been around for about 5 years now.

It is single-threaded, sacrificing session and context-specific overheads to maximise throughput.

As with Cassandra, it can cope with a great number of requests a second (~500rps at 150 concurrent requests), obviously scaling up with processor, and as it is stateless, it can be scaled out into round-robin farms to cope at massive sizes (see the PayPal and eBay metrics for more stat-candy).

To achieve a full-JavaScript stack, Node is often used to drive JavaScript test-runners and compilation programs on the developer side, then used again, alongside a web server such as ExpressJS, to serve server-side content back to a JavaScript application.

NodeJS runs on a variety of flavours of server and desktop platform, but I’m going to skip the install of NodeJS and try to keep this walkthrough as platform-agnostic as possible. A common feature of all platforms is NPM (the Node Package Manager), and this is what we’ll be using to install the ExpressJS and Cassandra libraries we will need to serve the content.

Firstly, I’ll create a folder to run this application from, then open up a command prompt and browse to the folder. Then I install Express by entering the following.

npm install express

The folder should now have a node_modules folder, which will contain the express server code and its dependencies.

We will also need to add a package.json file so that node can verify the app when running.

{
  "name": "nd.neilhighley.com",
  "version": "1.0.0",
  "private": true,
  "scripts": {
    "start": "node ./bin/www"
  },
  "dependencies": {
    "express": "~4.11.1"
  }
}

Next, I create my application folder structure as follows;

>bin
>>www.js
>public
>>javascripts
>routes
>views

Now I add the JavaScript files. In the bin folder, create www.js with the following code for the webserver (node resolves the .js extension, so the "node ./bin/www" start script still finds it);

#!/usr/bin/env node

var app = require('../app');
var http = require('http');

var port = normalizePort(process.env.PORT || '8080');
app.set('port', port);
var server = http.createServer(app);
server.listen(port);
server.on('error', onError);
server.on('listening', onListening);

function normalizePort(val) {
    var port = parseInt(val, 10);

    if (isNaN(port)) {
        // named pipe
        return val;
    }

    if (port >= 0) {
        // port number
        return port;
    }

    return false;
}

function onError(error) {
    if (error.syscall !== 'listen') {
        throw error;
    }

    var bind = typeof port === 'string'
        ? 'Pipe ' + port
        : 'Port ' + port;

    // handle specific listen errors with friendly messages
    switch (error.code) {
        case 'EACCES':
            console.error(bind + ' requires elevated privileges');
            process.exit(1);
            break;
        case 'EADDRINUSE':
            console.error(bind + ' is already in use');
            process.exit(1);
            break;
        default:
            throw error;
    }
}

function onListening() {
    var addr = server.address();
    var bind = typeof addr === 'string'
        ? 'pipe ' + addr
        : 'port ' + addr.port;
    console.log('Listening on ' + bind);
}

The code above simply sets up the webserver to listen on port 8080, and sets our application main file as app.js in the folder root.

Before delving into app.js, I need to set up the routes. These will trap any calls to a particular URI and pass them to the appropriate codebase. I’m going to name the file after the route I will be using; this is not a requirement, but will help you in future! 😉

>routes
somedata.js

var express = require('express');
var router = express.Router();

/* GET data. */
router.get('/somedata/', function(req, res, next) {
    var jsonToSend = {Message: "Here is my resource"};
    res.json(jsonToSend);
});

module.exports = router;

Now, I can set up my app.js file in the folder root to contain the main application logic.

var express = require('express');
var path = require('path');
var routes = require('./routes/somedata');
var app = express();

// view engine setup
//app.set('views', path.join(__dirname, 'views'));
//app.set('view engine', 'jade');

app.use(express.static(path.join(__dirname, 'public')));
app.use('/', routes);

// catch 404 and forward to error handler
app.use(function(req, res, next) {
    var err = new Error('Not Found');
    err.status = 404;
    next(err);
});

// no view engine is set up, so return errors as JSON rather than rendering a view
app.use(function(err, req, res, next) {
    res.status(err.status || 500);
    res.json({
        message: err.message,
        error: err
    });
});

module.exports = app;

Let’s do a quick test by starting the application

npm start

Then browsing to

http://localhost:8080/somedata

If all goes well, we will receive a JSON file as a response.
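The response should be the JSON from our route:

{"Message":"Here is my resource"}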

Now I can alter the routes file to mirror a typical REST interface.

Replace the previous single GET function above with the following

/* GET data. */
router.get('/somedata/', function(req, res, next) {
    var jsonToSend = {Message: "Here is my resource"};
    res.json(jsonToSend);
});

router.get('/somedata/:item_id', function(req, res, next) {
    var jsonToSend = {Message: "Here is my resource of item " + req.params.item_id};
    res.json(jsonToSend);
});

Confirm the above by calling the following urls and checking the content;

http://localhost:8080/somedata/
http://localhost:8080/somedata/12

The second url should return with the message which has the item_id in it.

Now that I have confirmed the framework is running, I’ll connect to Cassandra and retrieve the records. To do that, we just need to replace the jsonToSend with the Cassandra response.

So, install the cassandra node client, as follows, in the root of your application.

npm install cassandra-driver

Then update the package.json to add the cassandra-driver, as installed, so package.json now looks like;

{
  "name": "nd.neilhighley.com",
  "version": "1.0.0",
  "private": true,
  "scripts": {
    "start": "node ./bin/www"
  },
  "dependencies": {
    "express": "~4.11.1",
    "cassandra-driver": "~1.0.2"
  }
}

I’ll use the same keyspace as the last blog post, so have a look there for details on setting up cassandra.

[Screenshot: the casswcfexample keyspace from the previous post]

Update the somedata.js file to the following

var express = require('express');
var router = express.Router();

var cassandra = require('cassandra-driver');
var async = require('async');

var client = new cassandra.Client({contactPoints: ['127.0.0.1'], keyspace: 'casswcfexample'});

function GetRecordsFromDatabase(callback) {
    client.execute("select tid,description,title from exampletable", function (err, result) {
        if (!err) {
            if (result.rows.length > 0) {
                callback(1, result.rows);
            } else {
                callback(1, {});
            }
        } else {
            callback(0, {});
        }
    });
}

function GetRecords(res) {
    var callback = function(status, recs) {
        if (status != 1) {
            res.json({Error: "Error"});
        } else {
            var jsonToSend = {Results: recs};
            res.json(jsonToSend);
        }
    };

    GetRecordsFromDatabase(callback);
}

/* GET data. */
router.get('/somedata/', function(req, res, next) {
    GetRecords(res);
});

router.get('/somedata/:item_id', function(req, res, next) {
    var jsonToSend = {Message: "Here is my resource from " + req.params.item_id};
    res.json(jsonToSend);
});

module.exports = router;

Now when we call the following url

http://localhost:8080/somedata/

We should receive the following (or similar).

{"Results":[{"tid":1,"description":"description","title":"first title"},{"tid":2,"description":"another description","title":"second title"}]}

If you get an error, it may mean that async and long need to be installed in your application root also. Install them with npm.

We can format the JSON returned by altering the GetRecords function.

The GetRecord function will be almost identical to the GetRecords function, just passing in the Id and changing the CQL.

function GetRecordFromDatabase(passed_item_id, callback) {
    client.execute("select tid,description,title from exampletable where tid=" + passed_item_id,
        function (err, result) {
            if (!err) {
                if (result.rows.length > 0) {
                    var record = result.rows[0];
                    callback(1, record);
                } else {
                    callback(1, {});
                }
            } else {
                callback(0, {});
            }
        });
}

function GetRecord(r_item_id, res) {
    var callback = function(status, recs) {
        if (status != 1) {
            res.json({Error: "Error"});
        } else {
            var jsonToSend = {Results: recs};
            res.json(jsonToSend);
        }
    };

    GetRecordFromDatabase(r_item_id, callback);
}
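To actually wire that up (not shown above), the :item_id route just needs to call GetRecord instead of returning the placeholder message:

router.get('/somedata/:item_id', function(req, res, next) {
    GetRecord(req.params.item_id, res);
});

One caveat: concatenating the id straight into the CQL is fine for a demo, but use a parameterised query for anything real.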

There are a few things that could be done to the above code, including;

– Place all CRUD operations in a standard library
– Have middleware fulfill requests, to enable client-side error catching and a cleaner implementation

Hope you have enjoyed the example above. Apologies for any typos, etc. 🙂

Until next time.

 

Connecting Apache Cassandra v2.0.11 to Windows Communication Foundation

Cassandra is the current de facto standard for flat, scalable, cloud-based database applications, and as such usually lives in AWS or Linux clusters.
Fortunately, you can install Cassandra on Windows in a single-node configuration. For more info on the configurations available in a distributed install, see Planet Cassandra.
Previously I have worked on SOLR, which struck me as a similar construct to the node structure of Cassandra, specifically in the lack of the master/slave arrangement you would normally find in traditional clusters.

By avoiding a master/slave format, Cassandra allows an elastic structure to cover the database records, akin to RAID striping, and does a great job when you have geo-spaced clusters, allowing for quick responses from the database at a global scale.

Cassandra has several connectors, covering all the major server-side languages, including JavaScript (NodeJS), Java, Python, Ruby and C#.

Here I will show you how to connect a C# WCF service to Cassandra to retrieve records.

Firstly, install Cassandra, taking special care with prerequisites.
Go into the Cassandra query language shell, cqlsh (it probably has a Python icon on it).

Create a new keyspace, which allows you to add tables.

create keyspace casswcfexample with replication={'class':'SimpleStrategy', 'replication_factor':1};

Add a new table to your keyspace.

use casswcfexample;
create table exampletable(tid int primary key, title varchar, description varchar);

You can press enter for a new line in CQL and it will continue; it won’t submit the statement until you add a semicolon

Add data to your keyspace’s exampletable

insert into exampletable(tid, title, description) values(1,'first title', 'description');
insert into exampletable(tid, title, description) values(2,'second title', 'another description');

Verify the data in your keyspace:table

select * from exampletable;
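cqlsh should come back with something along these lines (exact formatting may differ):

 tid | description         | title
-----+---------------------+--------------
   1 |         description |  first title
   2 | another description | second title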

OK, now we have a running cassandra instance with data in it, let’s hook it up to a new, empty, WCF project.

 

As a rule, I always have my data layer as a separate project, to allow me to reuse it.
Create a new solution and add a WCF Application; leave it named as WCFService1.
Create a new Class Library project and call it CassLibrary.

In the CassLibrary project, import the DataStax NuGet package for the Cassandra connector.

Create an object to hold our result rows

namespace CassLibrary
{
    public class DTOExampleTable
    {
        public int Id { get; set; }
        public string Title { get; set; }
        public string Description { get; set; }
    }
}

Create an extension method to convert rows to objects

using System.Collections.Generic;
using System.Linq;
using Cassandra;

namespace CassLibrary
{
    public static class DataExtensions
    {
        public static IEnumerable<DTOExampleTable> ToExampleTables(this RowSet rows)
        {
            return rows.Select(r => new DTOExampleTable()
            {
                Id = int.Parse(r["tid"].ToString()),
                Title = r["title"].ToString(),
                Description = r["description"].ToString()
            });
        }
    }
}

Create a simple Data class to hold the session connection and the retrieval of our data

using System.Collections.Generic;
using Cassandra;

namespace CassLibrary
{
    public class Data
    {
        private string _ks;
        private string _cp;

        public Data(string contactPoint, string keyspace)
        {
            _cp = contactPoint;
            _ks = keyspace;
        }

        private ISession GetSession()
        {
            Cluster cluster = Cluster.Builder().AddContactPoint(_cp).Build();
            ISession session = cluster.Connect(_ks);
            return session;
        }

        public IEnumerable<DTOExampleTable> GetExampleRows()
        {
            var s = GetSession();
            RowSet result = s.Execute("select * from exampletable");
            return result.ToExampleTables();
        }
    }
}

This is just a simple way of showing the data access. The Cassandra driver also has a LINQ representation, but I’ll not go into it here.

Now, go back to the WCFService1 project, and add a new method to the Service Contract interface.

[ServiceContract]
public interface IService1
{
    [OperationContract]
    IEnumerable<DTOExampleTable> GetExampleData();
}

And in the actual service, add the corresponding method. We’ll add some test data, then verify the service is set up correctly first.

public class Service1 : IService1
{
    public IEnumerable<DTOExampleTable> GetExampleData()
    {
        var dto = new List<DTOExampleTable>()
        {
            new DTOExampleTable()
            {
                Id = 999,
                Title = "Just a test for WCF",
                Description = "WCF Desc test"
            },
            new DTOExampleTable()
            {
                Id = 99,
                Title = "Just another test for WCF",
                Description = "WCF Desc test 2"
            }
        };

        return dto;
    }
}

Right click the WCF project and start debugging

If the WCFTestClient doesn’t fire up, go to the Visual Studio command prompt and type in

WCFTestClient.exe

Then attach your service URL to the client.

Once attached, select the GetExampleData Method in your service and invoke. You should see the two test items we added above.

Now that we know the service is fine, remove the test items from the service method, replacing them with the Data class in your CassLibrary.

public class Service1 : IService1
{
    public IEnumerable<DTOExampleTable> GetExampleData()
    {
        return new Data("127.0.0.1", "casswcfexample").GetExampleRows();
    }
}

Rebuild your service, and re-invoke it through WCFTestClient. You should see your Cassandra data.

The Cassandra client has DBContext and LINQ built in, so the example above should in no way be utilised in a real-life programming situation.

If I get a few hits on this page, I’ll post a more complete example, showing off the actual speed of Cassandra (a large cluster can serve hundreds of thousands of records a second). Cassandra’s only real competitor is MongoDB, which seemingly tops out at 20 nodes in a cluster.

It’s this throughput and stability in scaling that enabled Cassandra to be used at the Large Hadron Collider to store data as it came into one of the detectors.

Planet Cassandra : Companies Using Cassandra