volkanpaksoy.com Playground for the mind

Title: Playground for the mind
Description: Volkan Paksoy's blog
volkanpaksoy.com is ranked 1,922,971 in the world (among roughly 40 million domains). A low rank means that a website gets many visitors. This site is most popular with users in the United States, where it gets 50% of its traffic. The site is estimated to be worth $6,228. It has a low PageRank (0/10) and 1 backlink. volkanpaksoy.com has an SEO score of 43%.

volkanpaksoy.com Information

Website / Domain: volkanpaksoy.com
Website IP Address:
Domain DNS Server: ns-472.awsdns-59.com, ns-1725.awsdns-23.co.uk, ns-1405.awsdns-47.org, ns-513.awsdns-00.net

volkanpaksoy.com Rank

Alexa Rank: 1922971
Google PageRank: 0/10 (Google PageRank has been discontinued)

volkanpaksoy.com Traffic & Earnings

Purchase/Sale Value: $6,228
Daily Revenue: $17
Monthly Revenue: $511
Yearly Revenue: $6,228
Daily Unique Visitors: 1,570
Monthly Unique Visitors: 47,100
Yearly Unique Visitors: 573,050

volkanpaksoy.com Website HTTP Headers

StatusCode 200
Cache-Control max-age=600
Content-Type text/html; charset=utf-8
Date Tue, 16 Aug 2016 11:12:37 GMT
Server GitHub.com

volkanpaksoy.com Html To Plain Text

Playground for the mind. On software development, gadgets, IT security and more. Volkan Paksoy's blog.

New app on the block: Sleep On It (July 15, 2016; ios, swift)

I have a drawer full of gadgets that I bought at one point with hopes and dreams of magnificent projects and never even touched! Some time ago I started a simple spreadsheet to help myself with impulse buys. The idea was that before I bought something I had to put it on that spreadsheet, and it had to wait at least 7 days before I allowed myself to buy it. After 7 days strange things started to happen: in most cases I realised I had lost my appetite for that shiny new thing I once thought was a definite must-have! I kept listing all the stuff, but it quickly became hard to manage with just a spreadsheet.

Sleep On It

The idea behind the app is to automate and "beautify" that process a little. It has one Shopping Cart in which items have waiting periods. It seemed wasteful to do nothing during the waiting period; after all, it's not just about dissuading myself from buying new items. I should use that time to make informed decisions about the things I'm planning to buy. That's why I added the product comparison feature.

The shopping cart has a limited size. Otherwise you could add anything the moment you think of it, just to game the system so its waiting period would start (well, at least that's how my mind works!). If your cart is full you can still add items to the wish list and start reviewing products. It's basically a backlog of items; this way at least you won't forget about that thing you saw in your favourite online marketplace. Once you clear up some space in your cart, either by waiting out the period or deleting items permanently, you can transfer items from the wish list to the cart and officially kick off the waiting period.
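The mechanics described above (a size-limited cart with per-item waiting periods, plus an overflow wish list) can be sketched roughly as follows. This is an illustrative model in Python, not the app's actual implementation; the waiting period, cart size and all names are assumptions:

```python
from datetime import date, timedelta

WAIT_DAYS = 7    # assumed waiting period
CART_LIMIT = 5   # assumed cart size

class Cart:
    def __init__(self):
        self.items = {}       # item name -> date it entered the cart
        self.wish_list = []   # backlog used when the cart is full

    def add(self, name, today):
        # A full cart pushes the item to the wish list; its clock doesn't start.
        if len(self.items) >= CART_LIMIT:
            self.wish_list.append(name)
            return False
        self.items[name] = today
        return True

    def ready_to_buy(self, name, today):
        # An item can be bought only after sleeping on it for WAIT_DAYS.
        added = self.items.get(name)
        return added is not None and today - added >= timedelta(days=WAIT_DAYS)

    def remove(self, name, today):
        # Freeing a slot lets the oldest wish-list item move in and start waiting.
        self.items.pop(name, None)
        if self.wish_list and len(self.items) < CART_LIMIT:
            self.items[self.wish_list.pop(0)] = today

cart = Cart()
d0 = date(2016, 7, 1)
cart.add("drone", d0)
print(cart.ready_to_buy("drone", d0))                      # False
print(cart.ready_to_buy("drone", d0 + timedelta(days=7)))  # True
```

The key design point is that only items in the cart accumulate waiting time; wish-list items are deliberately frozen until a slot opens.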
I have a lot of ideas to improve it, but you've got to release at some point and I think it has enough to get me started. I hope someone else finds it useful too. If you're interested in the app please contact me; I might be able to hook you up with a promo code.

Resources: App on iTunes, App website

Swift Notes - The Basics (April 12, 2016; ios, swift)

When I started learning Swift for iOS development I also started to compile some notes along the way. This post is the first instalment of those notes. First, some basic concepts (in no particular order):

REPL

Swift supports a REPL (Read-Evaluate-Print Loop), so you can write code and get feedback very quickly using an Xcode Playground or the command line. As seen in the screenshot, there is no need to explicitly print values; they are automatically displayed on the right-hand side of the screen. A script can also be executed directly, without invoking the interpreter explicitly, by adding #!/usr/bin/swift at the top of the file.

Comments

Swift supports C-style comments: // for single-line comments and /* */ for multi-line comments. The great thing about multi-line comments is that you can nest them. For example, the following is a valid comment:

```swift
/* This is a /* valid multi-line */ comment that is not available in C# */
```

Considering how many times Visual Studio punished me while I was trying to comment out a block of code that had multi-line comments in it, this feature looks fantastic! Swift also supports doc comments (///) with Markdown, and even emojis (Ctrl + Cmd + Space for the emoji keyboard).

Imports

Standard libraries are imported automatically, but the main frameworks such as Foundation and UIKit need to be imported explicitly. Swift 2 supports a new type of import, preceded by the @testable keyword:

```swift
@testable import CustomFramework
```

It allows access to non-public members of a module, so that you can reach them externally from a unit test project. Before this, they all needed to be public in order to be testable.
Strings

The built-in string type is String. There is also NSString in the Foundation framework. They can sometimes be used interchangeably; for example you can assign a String to an NSString, but the opposite is not valid. You have to cast it explicitly to String first:

```swift
import Foundation
var string : String = "swiftString"
var nsString : NSString = "foundationString"
nsString = string // Works fine
string = nsString as String // Wouldn't work without the cast
```

startIndex is not an Int but an object. To get the next character: s[s.startIndex.successor()]. To get the last character: s[s.endIndex.predecessor()]. For a specific position: s[advance(s.startIndex, 1)].

let vs. var

Values created with the let keyword are immutable, so let is used to create constants. Variables are created with the var keyword:

```swift
let x1 = 7
x1 = 8 // won't compile
var x2 = 10
x2 = 11 // this works
```

The same principle applies to arrays:

```swift
let x3 = [1, 2, 3]
x3.append(4) // no go!
```

Type conversion

Types are inferred, so there is no need to declare them when declaring a variable, but mixed-type arithmetic requires an explicit conversion:

```swift
let someInt = 10
let someDouble = 10.0
let x = someDouble + Double(someInt)
```

Structs and Classes

Structs are value types, so a copy of the value is passed around. Classes are reference types. Constructors are called initializers and are special methods named init. You must specify an init method or default values for all properties when declaring a class:

```swift
class Person {
    var name: String = ""
    var age: Int = 0
    init(name: String, age: Int) {
        self.name = name
        self.age = age
    }
}
```

There is no new operator, so creating a new object looks simply like this:

```swift
let p = Person(name: "John", age: 30)
```

The equivalent of a destructor is the deinit method. Only classes can have deinitializers.
Collections

Array: an ordered list of items.

An empty array can be declared in a verbose way:

```swift
var n = Array<Int>()
```

or with the shorthand notation:

```swift
var n = [Int]()
```

An array with items can be initialized with:

```swift
var n = [1, 2, 3]
```

Arrays can be concatenated with +=:

```swift
n += [4, 5, 6]
```

Items can be added with the append method:

```swift
n.append(7)
```

Items can be inserted at a specific index:

```swift
n.insert(8, atIndex: 3)
print(n) // -> "[1, 2, 3, 8, 4, 5, 6, 7]"
```

Items can be deleted with removeAtIndex:

```swift
n.removeAtIndex(6)
print(n) // -> "[1, 2, 3, 8, 4, 5, 7]"
```

Items can be accessed by their index:

```swift
let aNumber = n[2]
```

A range of items can be replaced at once:

```swift
var n = [1, 2, 3, 4]
n[1...2] = [5, 6, 7]
print(n) // prints "[1, 5, 6, 7, 4]"
```

2-dimensional arrays can be declared with arrays as elements, and multiple subscripts can be used to access sub-items:

```swift
var n = [ [1, 2, 3], [4, 5, 6] ]
n[0][1] // value 2
```

Dictionary: a collection of key-value pairs.

Can be initialized without items:

```swift
var dict = [String:Int]()
```

or with items:

```swift
var dict = ["key1": 5, "key2": 3, "key3": 4]
```

To add an item, assign a value to a key using subscript syntax:

```swift
dict["key4"] = 666
```

To remove an item, assign nil:

```swift
dict["key2"] = nil
print(dict) // prints "["key1": 5, "key4": 666, "key3": 4]"
```

To update a value, the subscript can be used just as when adding an item, or the updateValue method can be called. updateValue returns an optional; if it didn't update anything the optional contains nil, so it can be used to check whether the value was actually updated:

```swift
var result = dict.updateValue(45, forKey: "key2")
if let r = result {
    print(dict["key2"])
} else {
    print("could not update") // --> This line would be printed
}
```

The interesting behaviour is that if it can't update the value, it adds it as a new one.
```swift
var dict = ["key1": 5, "key2": 3, "key3": 4]
var result = dict.updateValue(45, forKey: "key4")
if let r = result {
    print(dict["key4"])
} else {
    print("could not update")
}
print(dict) // prints "["key1": 5, "key4": 45, "key2": 3, "key3": 4]"
// key4 has been added after calling updateValue
```

After a successful update it returns the old value:

```swift
result = dict.updateValue(45, forKey: "key1")
if let r = result {
    print(r) // --> This would run and print "5"
} else {
    print("could not update")
}
```

This is consistent with the unsuccessful update returning nil: it always returns the former value. To get a value, subscript syntax is used:

```swift
var i = dict["key1"] // 45
```

Set: an unordered collection of distinct values.

The initialization notation is similar to the others:

```swift
var emo : Set = [ "??", "??", "??" ]
```

If duplicate items are added it doesn't throw an error but prunes the list automatically:

```swift
var emo : Set = [ "??", "??", "??", "??" ]
emo.count // 3
```

New items can be added with the insert method:

```swift
var emo : Set = [ "??", "??", "??", "??" ]
emo.insert("??")
emo.insert("??")
print(emo) // prints "["??", "??", "??", "??", "??"]"
```

There is no atIndex parameter as with arrays, and the order is unpredictable, as shown above. Among the three collection types, only arrays are ordered and can contain repeated values.

Miscellaneous

Semicolons are not required at the end of each line. Swift supports string interpolation. Swift uses reference counting (ARC); there is no tracing garbage collector. Curly braces are required even if there is only one statement in the body; for instance, the following block wouldn't compile:

```swift
let x = 10
if x == 10 print("Ten!")
```

The println function has been renamed to print; print adds a new line at the end automatically.
This behaviour can be overridden by explicitly specifying the appendNewLine attribute:

```swift
print("Hello, world without a new line", appendNewLine: false)
```

#available can be used to check compatibility:

```swift
if #available(iOS 9, *) {
    // use NSDataAsset
} else {
    // Panic!
}
```

A range can be checked with the ... and ~= operators. For example:

```swift
let x = 10
if 1...100 ~= x {
    print(x)
}
```

The variable is on the right in this expression; it wouldn't compile the other way around. There is a Range object that can be used to define, well, ranges!

```swift
var ageRange = 18...45
print(ageRange) // prints "18..<46"
```

Conclusion

So far I have managed to collect walking data from my MS Band and weight data from a Fitbit Aria. In this demo I limited the scope to weight data only, but the Fitbit API can also be used to track sleep, exercise and nutrition. I currently use My Fitness Pal to log what I eat. They too have an API, but even though I have requested twice they haven't given me a key yet! The good news is that Fitbit has given me a key, and I can get my MFP logs through the Fitbit API. I also log my sleep on Fitbit manually, so the next step is to combine all of these in one application to have a nice overview.

Resources: Source code for demo application; Wikipedia: SMART criteria; Fitbit API Reference; RestSharp POST Body Problems; Setup application

Playing with Microsoft Band (January 18, 2016; c#, development, gadget, band)

I bought this about 6 months ago, and in this post I'll talk about my experiences so far. They released version 2 of it last November, so I thought I should write about it before it gets terribly outdated!

Choosing the correct size

It comes in 3 sizes: Small, Medium and Large, and finding the correct size is the first challenge. They seem to have improved the sizing guide for version 2; the original one didn't mention the appropriate size for one's wrist circumference.
To the same effect, I followed someone's advice on a forum regarding the circumferences and downloaded a printable ruler to measure mine. It was right at the border of medium and large; I decided to go with medium, but even at the largest setting it's not comfortable and it irritates my skin. Most of the time I have to wear it on top of a large plaster.

Wearing notes

I hope they fixed it in v2, but the first-generation Band is quite bulky and uncomfortable. To be honest, most of the time I kept wearing it only because I had spent £170 and hadn't come to terms with having made a terrible investment. I wear it when I'm walking, but as soon as I arrive at home or work I take it off, because it's almost impossible to type with it on.

Band in action

If all you want is the fitness data, you can use it without pairing it with your phone, but pairing is helpful as you can read your texts on it, see emails and answer calls. I also installed the Microsoft Health app and started using the Microsoft Health dashboard.

Troubleshooting

As soon as I started using it I noticed a discrepancy in the step count on the Microsoft Health dashboard. It turns out that by default it was using the phone's motion tracker as well, so it was doubling my steps. After I turned that off, I started getting exactly the same results as on the Band.

Developing with Band and Cloud API

Recording data about something helps tremendously in making it manageable; that's why I like using these health & fitness gadgets. But of course the data doesn't mean much if you don't make sense of it. In my sample application I used the Microsoft Health Cloud API to get the Band's data. In order for this to work, the Band needs to sync with the Microsoft Health app on my phone, and the app syncs with my MS account. The API has a great guide here that can be downloaded as a PDF. It outlines all the necessary steps very clearly and in detail. Long story short: first you need to go to the Microsoft Account Developer Center and register an application.
This will give you a client ID and client secret that are used for OAuth 2.0 authentication. After the token has been acquired, using the actual API is quite simple; in my example app I used the /Summaries endpoint to get the daily step counts.

Implementation

The sample application is a simple WPF desktop application. Upon launch it checks whether the user has an access token stored; if not, it shows the OAuth window and the user needs to log in to their account. To let the user log in to their Microsoft account I added a web browser control to a window and navigated to the authorization page:

```csharp
string authUri = $"{baseUrl}/oauth20_authorize.srf?client_id={Settings.Default.ClientID}&scope={_scope}&response_type=code&redirect_uri={_redirectUri}";
webBrowser.Navigate(authUri);
```

Once the authorization is complete, the web browser is redirected with a query parameter named code. This is not the actual token we need; we now have to go to another URL (oauth20_token.srf) with this code and the client secret as parameters, and redeem the actual access token:

```csharp
private void webBrowser_Navigated(object sender, System.Windows.Navigation.NavigationEventArgs e)
{
    if (e.Uri.Query.Contains("code=") && e.Uri.Query.Contains("lc="))
    {
        string code = e.Uri.Query.Substring(1).Split('&')[0].Split('=')[1];
        string authUriRedeem = $"/oauth20_token.srf?client_id={Settings.Default.ClientID}&redirect_uri={_redirectUri}&client_secret={Settings.Default.ClientSecret}&code={code}&grant_type=authorization_code";
        var client = new RestClient(baseUrl);
        var request = new RestRequest(authUriRedeem, Method.GET);
        var response = (RestResponse)client.Execute(request);
        var content = response.Content;

        // Parse content and get the access token
        Settings.Default.AccessToken = JObject.Parse(content)["access_token"].Value<string>();
        Settings.Default.Save();
        Close();
    }
}
```

After we get the authorization out of the way, we can actually call the API and get some results.
It's a simple GET call (https://api.microsofthealth.net/v1/me/Summaries/daily) and the response JSON is pretty straightforward. The only thing to keep in mind is to add the access token to the Authorization header:

```csharp
request.AddHeader("Authorization", $"bearer {Settings.Default.AccessToken}");
```

Here's a sample output for a daily summary:

```json
{
  "userId": "67491ecc-c408-47b6-a3ad-041edb410524",
  "startTime": "2016-01-18T00:00:00.000+00:00",
  "endTime": "2016-01-19T00:00:00.000+00:00",
  "parentDay": "2016-01-18T00:00:00.000+00:00",
  "isTransitDay": false,
  "period": "Daily",
  "duration": "P1D",
  "stepsTaken": 2784,
  "caloriesBurnedSummary": { "period": "Daily", "totalCalories": 1119 },
  "heartRateSummary": { "period": "Daily", "averageHeartRate": 77, "peakHeartRate": 88, "lowestHeartRate": 68 },
  "distanceSummary": { "period": "Daily", "totalDistance": 232468, "totalDistanceOnFoot": 232468 }
}
```

Since we now have the data, we can visualize it. If you want to play with the sample code, don't forget to register an app and update the settings with your own client ID and secret.

Next, I guess the most fun would be to develop something that actually runs on the device. My next goal with my Band is to develop a custom tile using its SDK. I hope I can finish it while a first-gen device is still fairly relevant.

Resources: Sample project source code; MS Band Sizing Guide; Online Ruler; MS Health API Getting Started guide; Microsoft Health Cloud API reference

Top 3 AWS Gotchas (January 11, 2016; aws, s3, ec2, eip)

I've been using AWS for a few years now, and over the years I've noticed there are some questions that keep popping up. I was confused by these issues at first, and as they seem to trip everybody up at some point, I decided to compile a small list of common gotchas. I'll update this list, or post another one, if I come across more of these.

1.
The S3 folder delusion

When you use the AWS console you can create folders to group objects, but this is just a delusion deliberately created by AWS to simplify usage. In reality, S3 has a flat structure and all objects are on the same level. Here's the excerpt from the AWS documentation that states this fact:

"In Amazon S3, buckets and objects are the primary resources, where objects are stored in buckets. Amazon S3 has a flat structure with no hierarchy like you would see in a typical file system. However, for the sake of organizational simplicity, the Amazon S3 console supports the folder concept as a means of grouping objects. Amazon S3 does this by using key name prefixes for objects."

So essentially AWS is just smart enough to recognize the standard folder notation we've been using for ages, to make things easier for us.

2. Reserved instance confusion

Reserved instances cost less but require some resource planning and paying some money up-front. Although there is now an option to buy reserved instances with no upfront payment, they generally shine on long-term commitments with heavy usage (always-on machines). The confusing bit is that you don't reserve actual instances. Unfortunately the management console doesn't do a great job of bridging that gap: when you buy a reserved instance you don't even know which running instance it covers. Basically you buy a subscription for 1 or 3 years and you pay less for any machine that matches the reservation's criteria. For instance, say you reserved one Linux t1.small instance for 12 months and you are running two t1.small Linux instances at the moment. You will pay the reserved instance price for one of them and the on-demand price for the other; from a financial point of view it doesn't matter which one is which. If you shut down one of those instances (again, regardless of which one), you still pay the reserved instance price for the remaining one, as it matches your reservation criteria. So that's all there is to it really.
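The billing arithmetic can be made concrete with a small sketch. Note that the hourly prices below are invented for illustration and are not real AWS rates; the point is only how a reservation interacts with the number of matching running instances:

```python
# Illustrative only: made-up prices, not real AWS rates.
ON_DEMAND_HOURLY = 0.05   # assumed on-demand price for a t1.small
RESERVED_HOURLY = 0.03    # assumed effective hourly price under a reservation
HOURS_PER_MONTH = 730

def monthly_bill(running, reserved):
    # You pay for every reservation whether or not an instance uses it;
    # instances beyond the reserved count are billed at the on-demand rate.
    on_demand = max(running - reserved, 0)
    return (reserved * RESERVED_HOURLY + on_demand * ON_DEMAND_HOURLY) * HOURS_PER_MONTH

# 2 running t1.smalls, 1 reservation: one billed reserved, one on-demand.
print(round(monthly_bill(2, 1), 1))  # 58.4
# Shut one down: the remaining instance still matches, so it gets the reserved rate.
print(round(monthly_bill(1, 1), 1))  # 21.9
# Shut both down: the reservation itself is still billed.
print(round(monthly_bill(0, 1), 1))  # 21.9
```

This mirrors the point above: the reservation is a billing construct matched against whatever happens to be running, not a label on a specific instance.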
Reserved instances are just about billing and have nothing to do with the actual running instances.

3. Public/Elastic IP uncertainty

There are 3 types of IP addresses in AWS:

Private IPs are internal IPs that every instance is assigned. They remain the same throughout the lifespan of the instance and, as the name implies, they are not addressable from the Internet.

Public IPs are optional. They remain the same as long as the instance is running, but they are likely to change after a reboot, so they are not reliable for web-accessible applications.

Elastic IPs are basically static IPs that never change. By default AWS gives you up to 5 EIPs; if you need more you have to contact their support. They come free of charge as long as they are associated with a running instance, though it costs a small amount to keep them around unused.

Resources: AWS Documentation: Working with Folders; Reserved Instances FAQ; Amazon EC2 Instance IP Addressing

A DIY PDF Reader (November 19, 2015; csharp, wpf, syncfusion, pdf)

Every few months I have to clean up my desktop computer, as dust gets stuck in the CPU fan and it gets hot and slow and loud and annoying! A few days ago I snapped, decided to phase out the desktop and made my laptop my main machine. Even though I love making a fresh start on a new computer, it comes with re-installing a bunch of stuff. One missing thing that made itself obvious at the very start was a PDF reader. So far I've always been disappointed with PDF viewers: they are too bloated with unnecessary features, and they always try to install a browser toolbar or an anti-virus trial.

My DIY PDF Reader

I started looking into my options for building my own PDF viewer and fortunately didn't have to look very far. SyncFusion offers a free license for their products to indie developers and small startups. I had used their great wizard control in a past project (Image2PDF), so I first checked whether they had something for me.
It turns out they have exactly what I needed, wrapped in an easy-to-use control: their WPF suite comes with a PdfViewerControl. It supports standard navigation and zooming functions, which is pretty much all I need from a PDF viewer. So all I had to do was start a new WPF project, drag & drop a PdfViewerControl and run! The whole XAML code looks like this:

And for my 5 minutes of work, this is the application I got:

Conclusion

If I need more features in the future, I think I'll just build on this. I still have the open-source PDF library iTextSharp, which I like quite a lot, and now I have SyncFusion's PDF components and libraries in my arsenal, so I have no intention of dealing with adware-ridden, bloated applications with lots of security flaws.

Resources: DIY PDF Viewer Source Code; SyncFusion PDF Viewer product page; SyncFusion Community License

Playing with TFL API with C#, Xamarin and Swift (November 13, 2015; csharp, ios, swift, wpf, xamarin, tfl api)

Recently I discovered that Transport for London (TFL) has some great APIs, so I can play around with some familiar data. The API is very easy to use, as an API key is not even mandatory. My main goal here is to discover what I can do with this data and build a few user interfaces consuming it.
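Before building full clients, the essential projection the post works with (a line object reduced to its name and current status) can be sanity-checked in a few lines. This is a Python sketch against a canned, trimmed response rather than a live call; the two sample lines below are hypothetical, but they follow the shape the tube status endpoint returns:

```python
import json

# Trimmed, hypothetical sample of what the TFL tube status endpoint returns
# (the full object shape is shown later in the post).
sample = json.loads("""
[
  {"id": "central", "name": "Central",
   "lineStatuses": [{"statusSeverity": 10,
                     "statusSeverityDescription": "Good Service"}]},
  {"id": "victoria", "name": "Victoria",
   "lineStatuses": [{"statusSeverity": 9,
                     "statusSeverityDescription": "Minor Delays"}]}
]
""")

# Reduce each line object to name -> current status: the same projection
# the post's core library performs in C#.
status = {line["name"]: line["lineStatuses"][0]["statusSeverityDescription"]
          for line in sample}
print(status)  # {'Central': 'Good Service', 'Victoria': 'Minor Delays'}
```

Working against a canned payload like this makes it easy to pin down the parsing logic before dealing with HTTP at all.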
All source code is available on GitHub.

Tube status

The API endpoint I will use returns the current status of the tube lines as an array of the following JSON objects:

```json
{
  "$type": "Tfl.Api.Presentation.Entities.Line, Tfl.Api.Presentation.Entities",
  "id": "central",
  "name": "Central",
  "modeName": "tube",
  "created": "2015-10-14T10:31:00.39",
  "modified": "2015-10-14T10:31:00.39",
  "lineStatuses": [
    {
      "$type": "Tfl.Api.Presentation.Entities.LineStatus, Tfl.Api.Presentation.Entities",
      "id": 0,
      "statusSeverity": 10,
      "statusSeverityDescription": "Good Service",
      "created": "0001-01-01T00:00:00",
      "validityPeriods": []
    }
  ],
  "routeSections": [],
  "serviceTypes": [
    {
      "$type": "Tfl.Api.Presentation.Entities.LineServiceTypeInfo, Tfl.Api.Presentation.Entities",
      "name": "Regular",
      "uri": "/Line/Route?ids=Central&serviceTypes=Regular"
    }
  ]
}
```

Visualizing the data - line colours

TFL has standard colours for tube lines, which are documented here, so I created a small lookup JSON using that reference:

```json
[
  { "id": "bakerloo",         "CMYK": { "M": 58, "Y": 100, "K": 33 },  "RGB": { "R": 137, "G": 78,  "B": 36  } },
  { "id": "central",          "CMYK": { "M": 95, "Y": 100 },           "RGB": { "R": 220, "G": 36,  "B": 31  } },
  { "id": "circle",           "CMYK": { "M": 16, "Y": 100 },           "RGB": { "R": 255, "G": 206, "B": 0   } },
  { "id": "district",         "CMYK": { "C": 95, "Y": 100, "K": 27 },  "RGB": { "R": 0,   "G": 114, "B": 41  } },
  { "id": "hammersmith-city", "CMYK": { "M": 45, "Y": 10 },            "RGB": { "R": 215, "G": 153, "B": 175 } },
  { "id": "jubilee",          "CMYK": { "C": 5,  "K": 45 },            "RGB": { "R": 134, "G": 143, "B": 152 } },
  { "id": "metropolitan",     "CMYK": { "C": 5,  "M": 100, "K": 40 },  "RGB": { "R": 117, "G": 16,  "B": 86  } },
  { "id": "northern",         "CMYK": { "K": 100 },                    "RGB": { "R": 0,   "G": 0,   "B": 0   } },
  { "id": "piccadilly",       "CMYK": { "C": 100, "M": 88, "K": 5 },   "RGB": { "R": 0,   "G": 25,  "B": 168 } },
  { "id": "victoria",         "CMYK": { "C": 85, "M": 19 },            "RGB": { "R": 0,   "G": 160, "B": 226 } },
  { "id": "waterloo-city",    "CMYK": { "C": 47, "Y": 32 },            "RGB": { "R": 118, "G": 208, "B": 189 } }
]
```

I was hoping to map status values to colours as well (i.e. "Severe Delays" to red), but there is no official guide for that. The status codes and values can be retrieved from this endpoint: https://api.tfl.gov.uk/line/meta/severity, which returns a collection of objects like this:

```json
{
  "$type": "Tfl.Api.Presentation.Entities.StatusSeverity, Tfl.Api.Presentation.Entities",
  "modeName": "tube",
  "severityLevel": 2,
  "description": "Suspended"
}
```

I simplified it for my purposes (just the values for tube):

```json
[
  { "severityLevel": 0,  "description": "Special Service" },
  { "severityLevel": 1,  "description": "Closed" },
  { "severityLevel": 2,  "description": "Suspended" },
  { "severityLevel": 3,  "description": "Part Suspended" },
  { "severityLevel": 4,  "description": "Planned Closure" },
  { "severityLevel": 5,  "description": "Part Closure" },
  { "severityLevel": 6,  "description": "Severe Delays" },
  { "severityLevel": 7,  "description": "Reduced Service" },
  { "severityLevel": 8,  "description": "Bus Service" },
  { "severityLevel": 9,  "description": "Minor Delays" },
  { "severityLevel": 10, "description": "Good Service" },
  { "severityLevel": 11, "description": "Part Closed" },
  { "severityLevel": 12, "description": "Exit Only" },
  { "severityLevel": 13, "description": "No Step Free Access" },
  { "severityLevel": 14, "description": "Change of frequency" },
  { "severityLevel": 15, "description": "Diverted" },
  { "severityLevel": 16, "description": "Not Running" },
  { "severityLevel": 17, "description": "Issues Reported" },
  { "severityLevel": 18, "description": "No Issues" },
  { "severityLevel": 19, "description": "Information" },
  { "severityLevel": 20, "description": "Service Closed" }
]
```

I will keep it around, but in this initial version I won't use it, as the description is returned with the status query anyway. Still, it was a useful exercise to figure out that there is no "official" colour for status values.
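Since no official status-to-colour mapping exists, any such mapping has to be invented. Purely as an illustration (a Python sketch with completely made-up colour buckets, exploiting the fact that lower severityLevel values tend to mean more disruption), one could group the levels like this:

```python
# Entirely made-up bucketing for illustration; TFL defines no official
# colour per status. Lower severityLevel generally means more disruptive.
def status_colour(severity_level):
    if severity_level <= 5:      # Closed, Suspended, Planned/Part Closure...
        return "red"
    elif severity_level <= 9:    # Severe/Minor Delays, Reduced/Bus Service
        return "amber"
    elif severity_level == 10:   # Good Service
        return "green"
    else:                        # informational or edge-case statuses
        return "grey"

print(status_colour(2))   # red   (Suspended)
print(status_colour(6))   # amber (Severe Delays)
print(status_colour(10))  # green (Good Service)
```

Statuses like "No Step Free Access" end up in the grey bucket, which is exactly the ambiguity the post points out: there is no natural colour for them.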
After all, what's the colour of "No Step Free Access" or "Exit Only"? There is also a reason field that explains the effects of any delays etc., which should be displayed along with the severity, especially when there are disruptions in the service. 'Nuff said about the data! Let's start building something with it!

Core library

As I will build several clients, the API call to retrieve the tube status is encapsulated in a core library, which basically sends the HTTP request, parses the JSON and returns the list of LineInfo objects:

```csharp
public class Fetcher
{
    private readonly string _apiEndPoint = "https://api.tfl.gov.uk/line/mode/tube/status?detail=true";

    public List<LineInfo> GetTubeInfo()
    {
        var client = new RestClient(_apiEndPoint);
        var request = new RestRequest("/", Method.GET);
        request.AddHeader("Content-Type", "application/json");
        var response = (RestResponse)client.Execute(request);
        var content = response.Content;
        var tflResponse = JsonConvert.DeserializeObject<List<Line>>(content);
        var lineInfoList = tflResponse.Select(t => new LineInfo()
        {
            Id = t.id,
            Name = t.name,
            Reason = t.lineStatuses[0].reason,
            StatusSeverityDescription = t.lineStatuses[0].statusSeverityDescription,
            StatusSeverity = t.lineStatuses[0].statusSeverity
        }).ToList();
        return lineInfoList;
    }
}
```

The LineInfo class contains the current status with its description.
It also contains the colour defined by TFL for that tube line:

```csharp
public class LineInfo
{
    public string Id { get; set; }
    public string Name { get; set; }
    public int StatusSeverity { get; set; }
    public string StatusSeverityDescription { get; set; }
    public string Reason { get; set; }
    public RGB LineColour
    {
        get { return TubeColourHelper.GetRGBColour(this.Id); }
    }
}
```

As the line colours aren't returned by the service, I populate them with a helper class:

```csharp
public class TubeColourHelper
{
    private static Dictionary<string, RGB> _tubeColorRGBDictionary = new Dictionary<string, RGB>();

    static TubeColourHelper()
    {
        _tubeColorRGBDictionary = new Dictionary<string, RGB>();
        string json = File.ReadAllText("./data/colours.json");
        var tubeColors = JArray.Parse(json);
        foreach (var tubeColor in tubeColors)
        {
            _tubeColorRGBDictionary.Add(tubeColor["id"].Value<string>(),
                new RGB(
                    tubeColor["RGB"]["R"]?.Value<int>() ?? 0,
                    tubeColor["RGB"]["G"]?.Value<int>() ?? 0,
                    tubeColor["RGB"]["B"]?.Value<int>() ?? 0));
        }
    }

    public static RGB GetRGBColour(string lineId)
    {
        if (!_tubeColorRGBDictionary.ContainsKey(lineId))
        {
            throw new ArgumentException($"Colour for line [{lineId}] could not be found in RGB colour map");
        }
        return _tubeColorRGBDictionary[lineId];
    }
}
```

The static constructor runs only the first time the class is accessed; it reads colours.json and populates the dictionary. From then on it's just an in-memory lookup.

First client: C# Console Application on Windows

Time to develop our first client and see some actual results. As is generally the case with console applications, this one is pretty simple and hassle-free. I decided to start with it just to check that the core library works as expected.
```csharp
class Program
{
    static void Main(string[] args)
    {
        var fetcher = new Fetcher();
        var viewer = new ConsoleViewer();
        bool exit = false;
        viewer.DisplayTubeStatus(fetcher.GetTubeInfo());
        do
        {
            ConsoleKeyInfo key = System.Console.ReadKey();
            switch (key.Key)
            {
                case ConsoleKey.F5:
                    viewer.DisplayTubeStatus(fetcher.GetTubeInfo());
                    break;
                case ConsoleKey.Q:
                    exit = true;
                    break;
                default:
                    System.Console.WriteLine("Unknown command");
                    break;
            }
        } while (!exit);
    }
}
```

It displays the results when first run; you can refresh by pressing F5 or quit by pressing Q. The output looks like this:

The problem with the console application is that I wasn't able to use the RGB values directly, as the console only supports an enumeration called ConsoleColor.

Second client: WPF Application on Windows

Now let's look at a more graphical UI, a WPF client. Same idea: display the results upon first run, then call the service again in the Refresh button's click event handler.

Third client: iOS App with Xamarin

I've recently subscribed to Xamarin, and one of the main reasons for starting this project was to see it in action. What I was mostly curious about was whether I could use my C# libraries, via NuGet packages, in an iOS application developed with Xamarin. This would allow me to build apps significantly faster. It didn't work out of the box, because I used C# 6.0 and .NET Framework 4.5.2 on the Windows side and that wasn't available on the Mac, but it wasn't too hard to change the target framework and make some small modifications to get it working. The good news is that Xamarin supports NuGet, and most common libraries have Mono support, including RestSharp and Newtonsoft.Json, which I used in this project. I had to remove and re-add the packages, but in the end they worked fine and I didn't have to change anything in the code. I won't go into implementation details, as there's not much change: the app has one table view controller, and it calls the core library to get the results and assigns them to the table's data source.
It’s a relief that I could have the same functionality as Windows with just minor changes. public override void ViewDidLoad() { base.ViewDidLoad(); var fetcher = new Fetcher(); var lineInfoList = fetcher.GetTubeInfo(); TableView.Source = new TubeStatusTableViewControllerSource(lineInfoList.ToArray()); TableView.ReloadData(); } Anyway, more on Xamarin later after I cover the Swift version. Fourth client: iOS App with Swift Last but not least, here comes Swift client built with XCode. Naturally this one cannot use the core library that the first 3 clients shared (which is good because I was looking for a chance to practice handling HTTP requests and parsinng JSON with Swift anyway). I didn’t use any external libraries so the implementation is a bit long but mainly it sends the request using NSURLSession and NSSessionDataTask. func getTubeStatus(completionHandler: (result: [LineInfo]?, error: NSError?) -> Void) { let parameters = ["detail" : "true"] let mutableMethod : String = Methods.TubeStatus taskForGETMethod(mutableMethod, parameters: parameters) { JSONResult, error in if let error = error { completionHandler(result: nil, error: error) } else { if let results = JSONResult as? [AnyObject] { let lineStatus = LineInfo.lineStatusFromResults(results) completionHandler(result: lineStatus, error: nil) } } } } Then constructs the LineInfo objects by calling the static lineStatusFromResults method: static func lineStatusFromResults(results: [AnyObject]) -> [LineInfo] { var lineStatus = [LineInfo]() for result in results { lineStatus.append(LineInfo(status: result)) } return lineStatus } which creates a new LineInfo and adds to resultset: init(status: AnyObject) { Id = status["id"] as! String Name = status["name"] as! String StatusSeverity = status["lineStatuses"]!![0]!["statusSeverity"] as! Int StatusSeverityDescription = status["lineStatuses"]!![0]!["statusSeverityDescription"] as! 
String LineColour = RGB(R: 0, G: 0, B: 0) } JSON parsing is a bit nasty because of all the optional unwrapping. I'll look into SwiftyJSON, a popular JSON library for Swift, later on. Finally, the controller displays the results: override func viewWillAppear(animated: Bool) { super.viewWillAppear(animated) TFLClient.sharedInstance().getTubeStatus { lineStatus, error in if let lineStatus = lineStatus { self.lineInfoList = lineStatus dispatch_async(dispatch_get_main_queue()) { self.tableView!.reloadData() } } else { print(error) } } } The custom cells are created when the data is loaded, and the text and colours are set: override func tableView(tableView: UITableView, cellForRowAtIndexPath indexPath: NSIndexPath) -> UITableViewCell { let cell = tableView.dequeueReusableCellWithIdentifier("TubeInfoCell", forIndexPath: indexPath) as! TubeInfoTableViewCell let lineStatus = lineInfoList[indexPath.row] cell.backgroundColor = colourHelper.getTubeColor(lineStatus.Id) cell.lineName?.text = lineStatus.Name cell.lineName?.textColor = UIColor.whiteColor() cell.severityDescription?.text = lineStatus.StatusSeverityDescription cell.severityDescription?.textColor = UIColor.whiteColor() return cell } And here's the output:

Xamarin vs Swift

Here's a quick overview and comparison of both platforms based on my (limited) experience with this toy project: Xcode is much faster when building and deploying. Xamarin Studio doesn't seem very intuitive at times; for example, the code snippets use Java notation. The more I use Swift the more I like it, and it doesn't slow me down terribly. Once you get used to it, the difference is more or less just syntax.
For example, the following two methods do the same thing. Xamarin (C#): public override nint RowsInSection (UITableView tableview, nint section) { return lineInfoList.Length; } Swift: override func tableView(tableView: UITableView, numberOfRowsInSection section: Int) -> Int { return lineInfoList.count } I could even argue I'd be more comfortable with the Swift version here, as I had no idea what the "nint" input parameter in the Xamarin version was. The idea behind the Xamarin subscription was to develop iOS apps quickly, as I'm a seasoned C# developer and feel comfortable with the language. But it turns out I can't move as fast as I expected. With the Indie subscription you can only use Xamarin Studio; enabling Visual Studio is only allowed with the Business edition, which costs $1000/year. And Xamarin Studio is a brand new IDE for me, so it definitely has a learning curve. Also, I'm getting used to Xcode now (despite the fact that it crashes a hundred times a day on average!).

Conclusion

This was just a reconnaissance mission to explore the TFL API and iOS development with Xamarin and Swift. It was a fun exercise for me; I hope anyone who reads this can benefit from it too. Resources: TubeStatusFetcher source code, TFL API Docs, TFL Colour Standards

Building a simple HTTP server with Nancy

November 11, 2015 csharp, nancy

Recently I needed to simulate HTTP responses from a 3rd party. I decided to use Nancy to quickly build a local web server that would handle my test requests and return the responses I wanted. Here's the definition of Nancy from their official website: "Nancy is a lightweight, low-ceremony, framework for building HTTP based services on .Net and Mono." It can handle DELETE, GET, HEAD, OPTIONS, POST, PUT and PATCH requests. It's very easy to customize and extend as it's module-based.
In order to build our tiny web server we are going to need the self-hosting package: Install-Package Nancy.Hosting.Self This automatically installs Nancy as well, since the self-hosting package depends on it.

Self-hosting in action

The container application can be anything, as long as it keeps running one way or another. A background service would be ideal for this task. Since all I need is testing, I just created a console application and added a Console.ReadKey() statement to keep it "alive": class Program { private string _url = "http://localhost"; private int _port = 12345; private NancyHost _nancy; public Program() { var uri = new Uri($"{_url}:{_port}/"); _nancy = new NancyHost(uri); } private void Start() { _nancy.Start(); Console.WriteLine($"Started listening on port {_port}"); Console.ReadKey(); _nancy.Stop(); } static void Main(string[] args) { var p = new Program(); p.Start(); } } If you try this code, it's likely that you'll get an error (AutomaticUrlReservationCreationFailureException) saying: The Nancy self host was unable to start, as no namespace reservation existed for the provided url(s).
Please either enable UrlReservations.CreateAutomatically on the HostConfiguration provided to the NancyHost, or create the reservations manually with the (elevated) command(s): netsh http add urlacl url="http://+:12345/" user="Everyone" There are three ways to resolve this issue, two of which are already suggested in the error message: 1. In an elevated command prompt (a fancy way of saying "run as administrator"!), run: netsh http add urlacl url="http://+:12345/" user="Everyone" What add urlacl does is reserve the specified URL for non-administrator users and accounts. If you want to delete the reservation later on, you can use the following command: netsh http delete urlacl url=http://+:12345/ 2. Specify a host configuration to NancyHost like this:

```csharp
var configuration = new HostConfiguration()
{
    UrlReservations = new UrlReservations() { CreateAutomatically = true }
};
_nancy = new NancyHost(configuration, uri);
```

This essentially does the same thing, and a UAC prompt pops up, so it's not all that automatic! 3. Run Visual Studio (and the standalone application when deployed) as administrator. After applying any one of the three solutions, let's run the application and try the address http://localhost:12345 in a browser and we get... Excellent! We are actually getting a response from the server, even though it's just a 404 error. Now let's add some functionality, otherwise it isn't terribly useful.

Handling requests

Requests are handled by modules. Creating a module is as simple as creating a class deriving from NancyModule. Let's create two handlers for the root, one for the GET verb and one for POST: public class SimpleModule : Nancy.NancyModule { public SimpleModule() { Get["/"] = _ => "Received GET request"; Post["/"] = _ => "Received POST request"; } } Nancy automatically discovers all modules so we don't have to register them. If there are conflicting handlers, the last one discovered overrides the previous ones.
For example, the following module would work fine and the second GET handler would be executed: public class SimpleModule : Nancy.NancyModule { public SimpleModule() { Get["/"] = _ => "Received GET request"; Post["/"] = _ => "Received POST request"; Get["/"] = _ => "Let me have the request!"; } }

Working with input data: Request parameters

In the examples above we used an underscore to represent the input as we didn't care about it, but most of the time we do. In that case we can get the request parameters as a DynamicDictionary (a type that comes with Nancy). For example, let's create a route for /user: public SimpleModule() { Get["/user/{id}"] = parameters => { if (((int)parameters.id) == 666) { return $"All hail user #{parameters.id}! \\m/"; } else { return "Just a regular user!"; } }; } And send the GET request: GET http://localhost:12345/user/666 HTTP/1.1 User-Agent: Fiddler Host: localhost:12345 Content-Length: 2 which would return the response: HTTP/1.1 200 OK Content-Type: text/html Server: Microsoft-HTTPAPI/2.0 Date: Tue, 10 Nov 2015 11:40:08 GMT Content-Length: 23 All hail user #666! \m/

Working with input data: Request body

Now let's try to handle data posted in the request body. It can be accessed through the this.Request.Body property. For the following request POST http://localhost:12345/ HTTP/1.1 User-Agent: Fiddler Host: localhost:12345 Content-Length: 55 Content-Type: application/json { "username": "volkan", "isAdmin": "sure!" } this code would first convert the request stream to a string and then deserialize it to a POCO: Post["/"] = _ => { var id = this.Request.Body; var length = this.Request.Body.Length; var data = new byte[length]; id.Read(data, 0, (int)length); var body = System.Text.Encoding.Default.GetString(data); var request = JsonConvert.DeserializeObject(body); return 200; }; If the data was posted from a form instead and sent in the following format in the body username=volkan&isAdmin=sure!
then we could simply convert it to a dictionary with a little bit of LINQ: Post["/"] = parameters => { var id = this.Request.Body; long length = this.Request.Body.Length; byte[] data = new byte[length]; id.Read(data, 0, (int)length); string body = System.Text.Encoding.Default.GetString(data); var p = body.Split('&') .Select(s => s.Split('=')) .ToDictionary(k => k.ElementAt(0), v => v.ElementAt(1)); if (p["username"] == "volkan") return "awesome!"; else return "meh!"; }; This works, but it's a lot of work to read the whole body and deserialize it manually! Fortunately, Nancy supports model binding. First we need to add a using statement, as the Bind extension method lives in Nancy.ModelBinding: using Nancy.ModelBinding; Now we can simplify the code with the help of model binding: Post["/"] = _ => { var request = this.Bind(); return request.username; }; The important thing to note is to send the data with the appropriate content type. For the form data example the request should look like this: POST http://localhost:12345/ HTTP/1.1 User-Agent: Fiddler Host: localhost:12345 Content-Length: 29 Content-Type: application/x-www-form-urlencoded username=volkan&isAdmin=sure! It also works for binding JSON to the same POCO.

Preparing responses

Nancy is very flexible in terms of responses. As shown in the examples above, you can return a string: Post["/"] = _ => { return "This is a valid response"; }; which would yield this HTTP message on the wire: HTTP/1.1 200 OK Content-Type: text/html Server: Microsoft-HTTPAPI/2.0 Date: Tue, 10 Nov 2015 15:48:12 GMT Content-Length: 24 This is a valid response The response code is set to 200 OK automatically and the text is sent in the response body.
We can also just set the status code and return a response with a simple one-liner: Post["/"] = _ => 405; which would produce: HTTP/1.1 405 Method Not Allowed Content-Type: text/html Server: Microsoft-HTTPAPI/2.0 Date: Tue, 10 Nov 2015 15:51:36 GMT Content-Length: 0 To prepare more complex responses, with headers and everything, we can construct a new Response object like this: Post["/"] = _ => { string jsonString = "{ username: \"admin\", password: \"just kidding\" }"; byte[] jsonBytes = Encoding.UTF8.GetBytes(jsonString); return new Response() { StatusCode = HttpStatusCode.OK, ContentType = "application/json", ReasonPhrase = "Because why not!", Headers = new Dictionary<string, string>() { { "Content-Type", "application/json" }, { "X-Custom-Header", "Sup?" } }, Contents = c => c.Write(jsonBytes, 0, jsonBytes.Length) }; }; and we would get this at the other end of the line: HTTP/1.1 200 Because why not! Content-Type: application/json Server: Microsoft-HTTPAPI/2.0 X-Custom-Header: Sup? Date: Tue, 10 Nov 2015 16:09:19 GMT Content-Length: 47 { username: "admin", password: "just kidding" } Response also comes with a lot of useful methods like AsJson, AsXml and AsRedirect. For example, we could simplify returning a JSON response like this: Post["/"] = _ => { return Response.AsJson( new SimpleResponse() { Status = "A-OK!", ErrorCode = 1, Description = "All systems are go!" }); }; and the result would contain the appropriate header and status code: HTTP/1.1 200 OK Content-Type: application/json; charset=utf-8 Server: Microsoft-HTTPAPI/2.0 Date: Tue, 10 Nov 2015 16:19:18 GMT Content-Length: 68 {"status":"A-OK!","errorCode":1,"description":"All systems are go!"} One extension I like is the AsRedirect method. The following example would redirect to the Google search results for a given parameter: Get["/search"] = parameters => { string s = this.Request.Query["q"]; return Response.AsRedirect($"http://www.google.com/search?q={s}"); };

HTTPS

What if we needed to support HTTPS for our tests for some reason?
Fear not, Nancy covers that too. If we just try to use HTTPS by changing the protocol, we get this exception: The connection to 'localhost' failed. System.Security.SecurityException Failed to negotiate HTTPS connection with server.fiddler.network.https HTTPS handshake to localhost (for #2) failed. System.IO.IOException Unable to read data from the transport connection: An existing connection was forcibly closed by the remote host. The solution is to create a self-signed certificate and register it using the netsh http add commands. Here's the step-by-step process: 1. Create a self-signed certificate. Open a Visual Studio command prompt and enter the following command: makecert nancy.cer You can provide more properties so that the certificate shows up with a meaningful name (see the MSDN reference for makecert in the resources below). 2. Run mmc and add the Certificates snap-in. Make sure to select Computer Account. I selected My User Account at first and it gave the following error: SSL Certificate add failed, Error: 1312 A specified logon session does not exist. It may already have been terminated. In that case the solution is just to drag and drop the certificate to the computer account as shown below: 3. Right-click on Certificates (Local Computer) -> Personal -> Certificates, select All Tasks -> Import and browse to the nancy.cer file created in Step 1. 4. Double-click the certificate, switch to the Details tab, scroll to the bottom and copy the Thumbprint value (and remove the spaces after copying it). 5. Now enter the following commands. The first one is the same as before, just with HTTPS as the protocol. The second one registers the certificate we've just created: netsh http add urlacl url=https://+:12345/ user="Everyone" netsh http add sslcert ipport=0.0.0.0:12345 certhash=653a1c60d4daaae00b2a103f242eac965ca21bec appid={A0DEC7A4-CF28-42FD-9B85-AFFDDD4FDD0F} clientcertnegotiation=enable Here appid can be any GUID. Let's take it out for a test drive: Get["/"] = parameters => { return "Response over HTTPS!
Weeee!"; }; This request GET https://localhost:12345 HTTP/1.1 Host: localhost:12345 returns this response: HTTP/1.1 200 OK Content-Type: text/html Server: Microsoft-HTTPAPI/2.0 Date: Wed, 11 Nov 2015 10:24:58 GMT Content-Length: 27 Response over HTTPS! Weeee!

Conclusion

There are a few alternatives when you need a small web server to test something locally. Nancy is one of them. It's easy to configure and use, and it's lightweight. Apparently you can even host it on a Raspberry Pi! Resources: Official Nancy site, GitHub repository, Windows Dev Centre reference for add urlacl, MSDN Reference for Makecert, Accessing the client certificate when using SSL, Blog post: Enabling SSL for Self-Hosted Nancy, Running Nancy on your Raspberry Pi

Building Plugin-Based Applications with Managed Extensibility Framework (MEF)

October 22, 2015 csharp, design, development, mef, managed extensibility framework

In this post I will try to cover some of the basic concepts and features of MEF with a working example. In future posts I'll demonstrate MEF usage in more complex applications.

Background

Many successful and popular applications, such as Visual Studio, Eclipse and Sublime Text, support a plug-in model. Adopting a plugin-based model, whenever possible, has quite a few advantages: it helps to keep the core lightweight instead of cramming all features into the same code-base; it makes the application more robust, as new functionality can be added without changing any existing code; it makes development easier, as different modules can be developed by different people simultaneously; and it allows plugin development without distributing the main application's source code. Extensibility is based on composition, and it is very helpful for building SOLID-compliant applications as it follows the Open/Closed and Dependency Inversion principles. MEF is part of the .NET Framework as of version 4.0 and lives inside the System.ComponentModel.Composition namespace.
This is also the standard extension model used in Visual Studio. It is not meant to replace Inversion of Control (IoC) frameworks; rather, it is meant to simplify building extensible applications using dependency injection based on component composition.

Some terminology

Before diving into the sample, let's look at some core MEF terms: Part: The basic elements in MEF are called parts. Parts can provide services to other parts (exporting) and can consume other parts' services (importing). Container: This is the part that performs the composition. The most common one is the CompositionContainer class. Catalog: In order to discover the parts, containers use catalogs. There are various catalogs supplied by MEF, such as AssemblyCatalog (discovers attributed parts in a managed code assembly), DirectoryCatalog (discovers attributed parts in the assemblies in a specified directory), AggregateCatalog (a catalog that combines the elements of ComposablePartCatalog objects) and ApplicationCatalog (discovers attributed parts in the DLL and EXE files in an application's directory and path). Export / import: The way plugins make themselves discoverable is by exporting their implementation of a contract. A contract is simply a common interface that both the application and the plugins understand, so that they speak the same language, so to speak.

Sample Project

As I learn best by playing around, I decided to start with a simple project. I recently published a sample project for the Strategy design pattern, which I blogged about here. In this post I will use the same project and convert it into a plugin-based version.

IP Checker with MEF v1: Bare Essentials

At this point we have everything we need for the first version of the plugin-based IP checker.
Firstly, I divided my project into five parts: IPCheckerWithMEF.Lab (the consumer application), IPCheckerWithMEF.Contract (the project containing the common interface) and the plugins, i.e. the extensions for the main application: IPCheckerWithMEF.Plugins.AwsIPChecker, IPCheckerWithMEF.Plugins.CustomIPChecker and IPCheckerWithMEF.Plugins.DynDnsIPChecker. I set the output folder of the plugins to a directory called Plugins at the project level. Let's see some code! For this basic version we need three things: a container to handle the composition, a catalog that the container can use to discover the plugins, and a way to tell which classes should be discovered and imported. In this sample I used a DirectoryCatalog that points to the output folder of the plugin projects. After adding the required parts, the main application shaped up to be something like this: public class MainApplication { private CompositionContainer _container; [ImportMany(typeof(IIpChecker))] public List<IIpChecker> IpCheckerList { get; set; } public MainApplication(string pluginFolder) { var catalog = new DirectoryCatalog(pluginFolder); _container = new CompositionContainer(catalog); LoadPlugins(); } public void LoadPlugins() { try { _container.ComposeParts(this); } catch (CompositionException compositionException) { Console.WriteLine(compositionException.ToString()); } } } In the constructor, it instantiates a DirectoryCatalog with the given path and passes it to the container. The container imports the IIpChecker objects found in the assemblies inside that folder. Note that we didn't do anything to populate IpCheckerList ourselves: by decorating it with the ImportMany attribute we declared that it is to be filled by the composition engine. In this example we have multiple plugins, so we used ImportMany, as opposed to Import which looks for a single part to compose.
If we used Import we would get the following exception: Now, to complete the circle, we need to export our plugins with the Export attribute: [Export(typeof(IIpChecker))] public class AwsIPChecker : IIpChecker { public string GetExternalIp() { // ... } } Alternatively, we can use the InheritedExport attribute on the interface to export any class that implements IIpChecker: [InheritedExport(typeof(IIpChecker))] public interface IIpChecker { string GetExternalIp(); } This way the plugins would still be discovered even if they weren't decorated with the Export attribute, because of this inheritance model.

Putting it together

Now that we've seen the plugins that export the implementation and the part that discovers and imports them, let's see them all in action: class Program { static void Main(string[] args) { Console.WriteLine("Starting the main application"); string pluginFolder = @"..\..\..\Plugins\"; var app = new MainApplication(pluginFolder); Console.WriteLine($"{app.IpCheckerList.Count} plugin(s) loaded.."); Console.WriteLine("Executing all plugins..."); foreach (var ipChecker in app.IpCheckerList) { Console.WriteLine(ObfuscateIP(ipChecker.GetExternalIp())); } } private static string ObfuscateIP(string actualIp) { return Regex.Replace(actualIp, "[0-9]", "*"); } } We create the consumer application, which loads all the plugins in the directory we specify. Then we can loop over and execute all of them: So far so good. Now let's export some metadata about our plugins so that we can display the loaded plugins to the user.

IP Checker with MEF v2: Metadata comes into play

In almost all applications, plugins come with some sort of information so that the user can identify which ones have been installed and what they do.
To export the extra data let’s add a new interface: public interface IPluginInfo { string DisplayName { get; } string Description { get; } string Version { get; } } And on the plugins we fill that data and export it using the ExportMetadata attribute: [Export(typeof(IIpChecker))] [ExportMetadata("DisplayName", "Custom IP Checker")] [ExportMetadata("Description", "Uses homebrew service developed with Node.js and hosted on Heroku")] [ExportMetadata("Version", "2.1")] public class CustomIpChecker : IIpChecker { // ... } In v1, we only imported a list of objects implementing IIpChecker. So how do we accommodate this new piece of information? In order to do that we have to change the way we import the plugins and use the Lazy construct: [ImportMany] public List> Plugins { get; set; } According to MSDN this is mandatory to get metadata out of plugins: The importing part can use this data to decide which exports to use, or to gather information about an export without having to construct it. For this reason, an import must be lazy to use metadata So let’s load and display this new plugin information: private static void PrintPluginInfo() { Console.WriteLine($"{_app.Plugins.Count} plugin(s) loaded.."); Console.WriteLine("Displaying plugin info..."); Console.WriteLine(); foreach (var ipChecker in _app.Plugins) { Console.WriteLine("----------------------------------------"); Console.WriteLine($"Name: {ipChecker.Metadata.DisplayName}"); Console.WriteLine($"Description: {ipChecker.Metadata.Description}"); Console.WriteLine($"Version: {ipChecker.Metadata.Version}"); } } Notice that we access the metadata through [PluginName].Metadata.[PropertyName] properties. To access the actual plugin and call the exported methods we have to use [PluginName].Value such as: foreach (var ipChecker in _app.Plugins) { ipChecker.Value.GetExternalIp(); } Managing the plugins What if we want to add or remove plugins at runtime? 
We can do it without restarting the application by refreshing the catalog and calling the container's ComposeParts method again. In this sample application I added a FileSystemWatcher that listens to the Created and Deleted events on the Plugins folder and calls the application's LoadPlugins method when an event fires. LoadPlugins first refreshes the catalog and then composes the parts: public void LoadPlugins() { try { _catalog.Refresh(); _container.ComposeParts(this); } catch (CompositionException compositionException) { Console.WriteLine(compositionException.ToString()); } } But this change alone isn't sufficient; we would end up getting a CompositionException: By default recomposition is disabled, so we have to enable it explicitly while importing parts: [ImportMany(AllowRecomposition = true)] public List<Lazy<IIpChecker, IPluginInfo>> Plugins { get; set; } After these changes, the final version of the composing class looks like this: public class MainApplication { private CompositionContainer _container; private DirectoryCatalog _catalog; [ImportMany(AllowRecomposition = true)] public List<Lazy<IIpChecker, IPluginInfo>> Plugins { get; set; } public MainApplication(string pluginFolder) { _catalog = new DirectoryCatalog(pluginFolder); _container = new CompositionContainer(_catalog); LoadPlugins(); } public void LoadPlugins() { try { _catalog.Refresh(); _container.ComposeParts(this); } catch (CompositionException compositionException) { Console.WriteLine(compositionException.ToString()); } } } and the client app: class Program { private static readonly string _pluginFolder = @"..\..\..\Plugins\"; private static FileSystemWatcher _pluginWatcher; private static MainApplication _app; static void Main(string[] args) { Console.WriteLine("Starting the main application"); _pluginWatcher = new FileSystemWatcher(_pluginFolder); _pluginWatcher.Created += PluginWatcher_FolderUpdated; _pluginWatcher.Deleted += PluginWatcher_FolderUpdated; _pluginWatcher.EnableRaisingEvents = true; _app = new MainApplication(_pluginFolder);
PrintPluginInfo(); Console.ReadLine(); } private static void PrintPluginInfo() { Console.WriteLine($"{_app.Plugins.Count} plugin(s) loaded.."); Console.WriteLine("Displaying plugin info..."); Console.WriteLine(); foreach (var ipChecker in _app.Plugins) { Console.WriteLine("----------------------------------------"); Console.WriteLine($"Name: {ipChecker.Metadata.DisplayName}"); Console.WriteLine($"Description: {ipChecker.Metadata.Description}"); Console.WriteLine($"Version: {ipChecker.Metadata.Version}"); } } private static void PluginWatcher_FolderUpdated(object sender, FileSystemEventArgs e) { Console.WriteLine(); Console.WriteLine("===================================="); Console.WriteLine("Folder changed. Reloading plugins..."); Console.WriteLine(); _app.LoadPlugins(); PrintPluginInfo(); } } After these changes I started the application with two plugins in the target folder, added a third one while it was running, and got this output: It works the same way for deleted plugins, but not for updates, because the assemblies are locked by .NET. Adding new plugins at runtime is painless, but removing and updating them requires more attention as the plugin might be running at the time. Resources: Sample project source code, IP Checker with strategy design pattern blog post, MSDN: Managed Extensibility Framework (MEF), MSDN: System.ComponentModel.Composition.Hosting Namespace, MSDN: Attributed Programming Model Overview, MEF Source Code on CodePlex, MSDN Magazine Article: Building Composable Apps in .NET 4 with the Managed Extensibility Framework, CodeProject

Design Patterns: Abstract Factory

October 19, 2015 design, development, design patterns, csharp

A few days ago I published a post discussing the Factory Method pattern. This article is about the other factory design pattern: Abstract Factory.
Use case: Switching between configuration sources easily

Imagine a C# application that accesses ConfigurationManager.AppSettings whenever it needs a value from the configuration. This essentially hardcodes the configuration source, and it would be hard to change if you needed to switch to another source (a database, a web service, etc.). A nicer way is to "outsource" the creation of the configuration source to another class.

What is Abstract Factory?

Here's the official definition from the GoF: "Provide an interface for creating families of related or dependent objects without specifying their concrete classes."

Implementation

The application first composes the main class (ArticleFeedGenerator) with the services it will use and starts the process: static void Main(string[] args) { IConfigurationFactory configFactory = new AppConfigConfigurationFactory(); IApiSettings apiSettings = configFactory.GetApiSettings(); IFeedSettings feedSettings = configFactory.GetFeedSettings(); IFeedServiceSettings feedServiceSettings = configFactory.GetFeedServiceSettings(); IS3PublisherSettings s3PublishSettings = configFactory.GetS3PublisherSettings(); IOfflineClientSettings offlineClientSettings = configFactory.GetOfflineClientSettings(); var rareburgClient = new OfflineRareburgClient(offlineClientSettings); var rareburgArticleFeedService = new RareburgArticleFeedService(feedServiceSettings); var publishService = new S3PublishService(s3PublishSettings, feedSettings); var feedGenerator = new ArticleFeedGenerator(rareburgClient, rareburgArticleFeedService, publishService, feedSettings); feedGenerator.Run(); } This version uses AppConfigConfigurationFactory to get the values from App.config. When I need to switch to DynamoDB, which I also used in this example, all I have to do is replace one line of code in the application: var configFactory = new DynamoDBConfigurationFactory(); With this change alone we are essentially replacing a whole family of related classes.
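The IConfigurationFactory interface itself isn't listed in the post, but its shape follows from the calls in Main above. A sketch (the individual settings interfaces are reduced to empty markers here for brevity; the post only details IApiSettings later):

```csharp
// Abstract "product" contracts; the real interfaces expose the
// configuration values each service needs.
public interface IApiSettings { }
public interface IFeedSettings { }
public interface IFeedServiceSettings { }
public interface IS3PublisherSettings { }
public interface IOfflineClientSettings { }

// The abstract factory: one creation method per product in the family.
public interface IConfigurationFactory
{
    IApiSettings GetApiSettings();
    IFeedSettings GetFeedSettings();
    IFeedServiceSettings GetFeedServiceSettings();
    IS3PublisherSettings GetS3PublisherSettings();
    IOfflineClientSettings GetOfflineClientSettings();
}
```

Each concrete factory (AppConfigConfigurationFactory, DynamoDBConfigurationFactory) implements this interface, which is what makes the one-line swap in Main possible.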
On the factory floor

The abstract factory and the concrete factories that implement it are shown below: Concrete configuration factories create the classes that deal with a specific configuration source (the concrete products). For instance, AppConfigConfigurationFactory looks like this (simplified for brevity): public class AppConfigConfigurationFactory : IConfigurationFactory { public IApiSettings GetApiSettings() { return new AppConfigApiSettings(); } public IFeedServiceSettings GetFeedServiceSettings() { return new AppConfigFeedServiceSettings(); } } Similarly, DynamoDBConfigurationFactory is responsible for creating the concrete classes that access the values in DynamoDB: public class DynamoDBConfigurationFactory : IConfigurationFactory { protected Table _configTable; public DynamoDBConfigurationFactory() { AmazonDynamoDBClient dynamoClient = new AmazonDynamoDBClient("accessKey", "secretKey", RegionEndpoint.EUWest1); _configTable = Table.LoadTable(dynamoClient, "tableName"); } public IApiSettings GetApiSettings() { return new DynamoDBApiSettings(_configTable); } public IFeedServiceSettings GetFeedServiceSettings() { return new DynamoDBFeedServiceSettings(_configTable); } } Notice that all the "concrete products" implement the same "abstract product" interface, hence they are interchangeable. With the product classes in the picture, the diagram now looks like this: Finally, let's have a look at the concrete objects that carry out the actual job.
For example, IApiSettings exposes two string properties:

```csharp
public interface IApiSettings
{
    string ApiKey { get; }
    string ApiEndPoint { get; }
}
```

If we want to read these values from App.config, it's very straightforward:

```csharp
public class AppConfigApiSettings : IApiSettings
{
    public string ApiKey
    {
        get { return ConfigurationManager.AppSettings["Rareburg.ApiKey"]; }
    }

    public string ApiEndPoint
    {
        get { return ConfigurationManager.AppSettings["Rareburg.ApiEndPoint"]; }
    }
}
```

The DynamoDB version is somewhat more complex, but it makes no difference from the consumer's point of view. Here GetValue is a method in the base class that returns the value from the encapsulated Amazon.DynamoDBv2.DocumentModel.Table object:

```csharp
public class DynamoDBApiSettings : DynamoDBSettingsBase, IApiSettings
{
    public DynamoDBApiSettings(Table configTable)
        : base(configTable)
    {
    }

    public string ApiKey
    {
        get { return GetValue("Rareburg.ApiKey"); }
    }

    public string ApiEndPoint
    {
        get { return GetValue("Rareburg.ApiEndPoint"); }
    }
}
```

The concrete factory is responsible for creating the concrete classes it uses, so the client is completely oblivious to classes such as DynamoDBApiSettings or AppConfigApiSettings. This means we can add a whole new set of configuration classes (e.g. backed by a web service) and all we have to change in the client code is the one line where we instantiate the concrete factory. This approach also gives us flexibility in the concrete class implementations. For example, the DynamoDB config class family requires a Table object in their constructors. To avoid code repetition I derived them all from a base class and moved the table there, and that doesn't change anything in the client code.
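The base class mentioned above isn't shown in the post. A minimal sketch of it, assuming the config table stores items keyed by setting name with the value in a "Value" attribute (the real schema may differ), might look like this:

```csharp
using Amazon.DynamoDBv2.DocumentModel;

// Hypothetical base class for the DynamoDB-backed settings family.
// Assumes items are keyed by setting name and carry a "Value" attribute;
// this is an illustration, not the project's actual implementation.
public abstract class DynamoDBSettingsBase
{
    private readonly Table _configTable;

    protected DynamoDBSettingsBase(Table configTable)
    {
        _configTable = configTable;
    }

    protected string GetValue(string key)
    {
        // Fetch the document by its hash key and return the stored value.
        Document item = _configTable.GetItem(key);
        return item == null ? null : item["Value"].AsString();
    }
}
```

Keeping the Table in the base class is what lets every concrete settings class share a single connection while the client code stays unaware of DynamoDB entirely.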
Resources
Sample project source code
Rareburg article feed generator source code
Book: GoF Book
Book: Head First Design Patterns
CodeProject

Design Patterns: Factory Method

October 16, 2015  design, development, design patterns, csharp  comments edit

I recently developed a toy project called Rareburg article feed generator. It gets the articles from Rareburg.com (a marketplace for collectors) and creates an RSS feed. One challenge I had was picking the feed formatter class (RSS vs. Atom). I used the Factory Method design pattern to solve that, and this post discusses the pattern over that use case.

Use case: Creating an RSS feed formatter

I wanted my feed generator to support both RSS and Atom feeds, based on a value specified in the configuration. In order to pass XML validation the formatters needed to be modified a bit, and I wanted these details hidden from the client code.

What is Factory Method?

Here's the official definition from GoF:

Define an interface for creating an object, but let subclasses decide which class to instantiate. Factory Method lets a class defer instantiation to subclasses.

Factory Method is one of the creational patterns, which make it easy to separate object creation from the rest of the system. The actual business logic doesn't need to care about these object creation details.

Implementation

To see the implementation at a glance, let's have a look at the class diagram:

The base abstract class is called ArticleFeedGenerator (it corresponds to Creator in the GoF book).
Here's a shortened version:

```csharp
public abstract class ArticleFeedGenerator
{
    private SyndicationFeedFormatter _feedFormatter;

    // Factory method
    public abstract SyndicationFeedFormatter CreateFeedFormatter();

    public void Run()
    {
        var allArticles = _feedDataClient.GetAllArticles();
        _feed = _feedService.GetFeed(allArticles);
        _feedFormatter = CreateFeedFormatter();
        _publishService.Publish(_feedFormatter);
    }
}
```

ArticleFeedGenerator does all the work except creating a concrete implementation of SyndicationFeedFormatter. It delegates that to the derived classes, which provide the implementation of the abstract CreateFeedFormatter method. And here come the concrete implementations (which correspond to ConcreteCreator):

```csharp
public class AtomFeedGenerator : ArticleFeedGenerator
{
    public AtomFeedGenerator(
        IFeedDataClient feedDataClient,
        IFeedService feedService,
        IPublishService publishService,
        IFeedSettings feedSettings)
        : base(feedDataClient, feedService, publishService, feedSettings)
    {
    }

    public override SyndicationFeedFormatter CreateFeedFormatter()
    {
        return new Atom10FeedFormatter(_feed);
    }
}
```

In order to pass RSS validation, the RSS formatter needed more "love" than the Atom version above:

```csharp
public class RssFeedGenerator : ArticleFeedGenerator
{
    public RssFeedGenerator(
        IFeedDataClient feedDataClient,
        IFeedService feedService,
        IPublishService publishService,
        IFeedSettings feedSettings)
        : base(feedDataClient, feedService, publishService, feedSettings)
    {
    }

    public override SyndicationFeedFormatter CreateFeedFormatter()
    {
        var formatter = new Rss20FeedFormatter(_feed);
        formatter.SerializeExtensionsAsAtom = false;

        XNamespace atom = "http://www.w3.org/2005/Atom";
        _feed.AttributeExtensions.Add(
            new XmlQualifiedName("atom", XNamespace.Xmlns.NamespaceName), atom.NamespaceName);
        _feed.ElementExtensions.Add(
            new XElement(atom + "link",
                new XAttribute("href", _feedSettings.FeedUrl),
                new XAttribute("rel", "self"),
                new XAttribute("type", "application/rss+xml")));

        return formatter;
    }
}
```

Concrete creators create
concrete products, but the factory method returns the abstract product, which ArticleFeedGenerator (the abstract creator) works with. To avoid hard-coding the class name, a helper method called CreateFeedGenerator is added. The client code calls this method to get either an AtomFeedGenerator or an RssFeedGenerator based on the configuration value:

```csharp
class Program
{
    static void Main(string[] args)
    {
        var feedSettings = new AppConfigFeedSettings();
        ArticleFeedGenerator feedGenerator = CreateFeedGenerator(feedSettings);
        feedGenerator.Run();
    }

    private static ArticleFeedGenerator CreateFeedGenerator(IFeedSettings feedSettings)
    {
        string feedFormat = feedSettings.FeedFormat;
        switch (feedFormat.ToLower())
        {
            case "atom":
                return new AtomFeedGenerator(new RareburgClient(), new RareburgArticleFeedService(), new S3PublishService(), feedSettings);
            case "rss":
                return new RssFeedGenerator(new RareburgClient(), new RareburgArticleFeedService(), new S3PublishService(), feedSettings);
            default:
                throw new ArgumentException("Unknown feed format");
        }
    }
}
```

Resources
Sample project source code
Rareburg article feed generator source code
Book: GoF Book
Book: Head First Design Patterns
CodeProject

Design Patterns: Strategy

October 12, 2015  design, development, design patterns  comments edit

Oftentimes we need to swap one algorithm for another without changing the client code that consumes it. In this post I want to show a use case I came across where I used the Strategy pattern.

What is it?

Here's the official definition of the pattern from the GoF book:

Define a family of algorithms, encapsulate each one, and make them interchangeable. Strategy lets the algorithm vary independently from clients that use it.

Use case: Get external IP address

In an application I was working on I needed to get the external IP address of the computer the application is running on. There are various ways to achieve that.
This looked like a good opportunity to use the Strategy pattern, as I wanted to be able to switch between the different methods easily.

Implementation

The interface is quite simple:

```csharp
public interface IIpCheckStrategy
{
    string GetExternalIp();
}
```

Some services return their data in JSON format; some have extra text in it. By encapsulating the algorithms in their own classes this way, the client code doesn't have to worry about parsing the various return values; that's handled inside each class. If one service changes its output and breaks the implementation, I can recover just by changing the code that instantiates the class.

The concrete implementations of the interface are below. They implement IIpCheckStrategy and are responsible for getting the data and returning the parsed IP address as a string.

AWS IP checker:

```csharp
public class AwsIPCheckStrategy : IIpCheckStrategy
{
    public string GetExternalIp()
    {
        using (var client = new HttpClient())
        {
            client.BaseAddress = new Uri("http://checkip.amazonaws.com/");
            string result = client.GetStringAsync("").Result;
            return result.TrimEnd('\n');
        }
    }
}
```

DynDns IP checker:

```csharp
public class DynDnsIPCheckStrategy : IIpCheckStrategy
{
    public string GetExternalIp()
    {
        using (var client = new HttpClient())
        {
            client.BaseAddress = new Uri("http://checkip.dyndns.org/");
            HttpResponseMessage response = client.GetAsync("").Result;
            return HelperMethods.ExtractIPAddress(response.Content.ReadAsStringAsync().Result);
        }
    }
}
```

Custom IP checker:

```csharp
public class CustomIpCheckStrategy : IIpCheckStrategy
{
    public string GetExternalIp()
    {
        using (var client = new HttpClient())
        {
            client.BaseAddress = new Uri("http://check-ip.herokuapp.com/");
            client.DefaultRequestHeaders.Accept.Clear();
            client.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));

            HttpResponseMessage response = client.GetAsync("").Result;
            string json = response.Content.ReadAsStringAsync().Result;
            dynamic ip = Newtonsoft.Json.JsonConvert.DeserializeObject(json);
            string result = ip.ipAddress;
            return result;
        }
    }
}
```

Choosing the algorithm

The consumer of the algorithm can pick any class that implements IIpCheckStrategy and switch between them. For example:

```csharp
class StrategyClient1
{
    public void Execute()
    {
        IIpCheckStrategy ipChecker;

        ipChecker = new DynDnsIPCheckStrategy();
        Console.WriteLine(ipChecker.GetExternalIp());

        ipChecker = new AwsIPCheckStrategy();
        Console.WriteLine(ipChecker.GetExternalIp());

        ipChecker = new CustomIpCheckStrategy();
        Console.WriteLine(ipChecker.GetExternalIp());

        Console.ReadKey();
    }
}
```

In some cases the class name can also be stored in the configuration, so that it can be changed at runtime without recompiling the application. For instance:

```csharp
class StrategyClient2
{
    public void Execute()
    {
        string ipcheckerTypeName = ConfigurationManager.AppSettings["IPChecker"];
        IIpCheckStrategy ipchecker = Assembly.GetExecutingAssembly().CreateInstance(ipcheckerTypeName) as IIpCheckStrategy;
        Console.WriteLine(ipchecker.GetExternalIp());
    }
}
```

and the corresponding appSettings entry in the configuration holds the fully-qualified type name of the chosen strategy under the "IPChecker" key.

Resources
Sample source code
Head First Design Patterns
GoF Book: Design Patterns: Elements of Reusable Object-Oriented Software
Pluralsight course: C# Design Strategies
DoFactory Strategy Design Pattern
CodeProject

AWS Lambda Official Scheduling Support

October 9, 2015  development, aws, lambda  comments edit

Just two days ago I was jumping through a lot of hoops (maintaining state on S3, subscribing to public SNS topics) just to schedule a Lambda function, as explained here. If I had known all these problems were about to be solved in a day, I would have just waited! At the re:Invent 2015 event, AWS announced scheduled event support for Lambda functions. After tweaking the sample a little bit, I set my schedule like this, which runs on the first day of every month at 10:00:

```
cron(0 10 1 * ? *)
```

That's all there is to it for a monthly schedule. Reliable, and no need for my state management workarounds.
So the final code becomes as simple as sending an email when invoked:

Resources
AWS Lambda Update – Python, VPC, Increased Function Duration, Scheduling, and More
Documentation on scheduling
CodeProject

Mail automation with AWS Lambda and SNS

October 7, 2015  development, aws, lambda, s3, sns  comments edit

UPDATE: Yesterday (October 8th, 2015) Amazon announced official support for scheduled events, so I updated my function to use this feature. For the most up-to-date version of this project please visit the updated version.

I have a great accountant, but he has one flaw: I have to ask for the invoice every month! While waiting for him to automate the process, I decided to automate what I can on my end. There are many ways to skin a cat, as the saying goes; the way I picked for this task was developing an AWS Lambda function and triggering it by subscribing to a public SNS topic.

Step 1: Prepare a function to send emails

Developing a simple node.js function that sends emails was simple. First I needed to install two modules:

```
npm install nodemailer
npm install nodemailer-smtp-transport
```

And the function is straightforward:

```javascript
var transporter = nodemailer.createTransport(smtpTransport({
    host: 'email-smtp.eu-west-1.amazonaws.com',
    port: 587,
    auth: {
        user: '{ACCESS KEY}',
        pass: '{SECRET KEY}'
    }
}));

var text = 'Hi, Invoice! Thanks!';

var mailOptions = {
    from: 'from@me.net',
    to: 'to@someone.net',
    bcc: 'me2@me.com',
    subject: 'Invoice',
    text: text
};

transporter.sendMail(mailOptions, function(error, info) {
    if (error) {
        console.log(error);
    } else {
        console.log('Message sent');
    }
});
```

The challenge was deployment, as the script had some dependencies. If you choose Edit Inline and just paste the script, you get an error like this:

```
"errorMessage": "Cannot find module 'nodemailer'"
```

But it's very easy to deploy a full package with dependencies: just zip everything in the folder (without the folder itself) and upload the zip file.
The downside of this method is that you can no longer edit the code inline, so even fixing a trivial typo means re-zipping and re-uploading.

Step 2: Schedule the process

One simple method to schedule the process is to invoke the function using PowerShell and schedule a task to run the script:

```
Invoke-LMFunction -FunctionName automatedEmails -AccessKey accessKey -SecretKey secretKey -Region eu-west-1
```

But I don't want a dependency on any machine (local or an EC2 instance); otherwise I could write a few lines of C# to do the same job anyway. The idea of using Lambda is to avoid maintenance and let everything run on infrastructure that's maintained by AWS.

Unreliable Town Clock

Unfortunately AWS doesn't provide an easy method to schedule Lambda function invocations. For the sake of simplicity I decided to use the Unreliable Town Clock (UTC), which is essentially a public SNS topic that sends "chime" messages every 15 minutes. Since all I need is one email, I don't care if it skips a beat or two, as long as it chimes at least once throughout the day.

State management

Of course, to avoid bombarding my accountant with emails, I have to maintain state so that I only send one email per month. But Lambda functions must be stateless. Some alternatives are AWS S3 and DynamoDB. Since all I need is one simple integer value, I decided to store it in a text file on S3.
So first I download the log file and check the month of the last sent email:

```javascript
function downloadLog(next) {
    s3.getObject({ Bucket: bucketName, Key: fileName }, next);
}

function checkDate(response, next) {
    var currentDay = parseInt(event.Records[0].Sns.Message.day);
    currentMonth = parseInt(event.Records[0].Sns.Message.month);
    var lastMailMonth = parseInt(response.Body.toString());
    if (isNaN(lastMailMonth)) {
        lastMailMonth = currentMonth - 1;
    }
    if ((currentDay == targetDayOfMonth) && (currentMonth > lastMailMonth)) {
        next();
    }
}
```

Putting it together

So putting it all together, the final code is:

Let's see if it's going to help me get my invoices automatically!

Conclusion

A better approach would be to check my inbox for the invoice and send the email only if it hasn't been received already. A couple of reminders after the initial email would also be nice. But as my new resolution is to progress in small, incremental steps, I'll call it version 1.0 and leave the remaining tasks for a later version. My main goal was to achieve this task without having to worry about the infrastructure. I still don't, but that's only because a nice guy (namely Eric Hammond) decided to set up a public service for the rest of us. During my research I came across a few references saying that the same task can be done using AWS Simple Workflow (SWF). I haven't used this service before; it looked complicated and felt like there is a steep learning curve to go through.
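The combined gist is not reproduced above, but the heart of checkDate is a small date gate that can be expressed as a pure function, which makes the once-a-month rule easy to reason about. The function and parameter names below are mine, not from the original code:

```javascript
// Decide whether this SNS chime should trigger an email.
// Send only when today is the target day of the month and the log
// shows no email has gone out for the current month yet.
function shouldSendEmail(currentDay, currentMonth, lastMailMonth, targetDayOfMonth) {
    if (isNaN(lastMailMonth)) {
        // No valid log file yet: pretend last month's email was sent.
        lastMailMonth = currentMonth - 1;
    }
    return currentDay === targetDayOfMonth && currentMonth > lastMailMonth;
}

// Example: UTC chimes on the 1st of October, log file says September.
console.log(shouldSendEmail(1, 10, 9, 1));  // true: send and update the log
console.log(shouldSendEmail(2, 10, 9, 1));  // false: wrong day of month
console.log(shouldSendEmail(1, 10, 10, 1)); // false: already sent this month
```

Note that, like the original, this comparison doesn't handle the December-to-January rollover; a modulo-12 check would be needed for that.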
In Version 2 I should look into SWF, which would:

- allow me to handle a complex workflow
- make the dependency on the public SNS topic redundant
- handle state properly

Resources
Send email using nodejs and express in 5 simple steps
Using Packages and Native nodejs Modules in AWS Lambda
Create a Lambda Function Deployment Package
Schedule Recurring AWS Lambda Invocations With The Unreliable Town Clock (UTC)
Trigger AWS Lambda Functions Using Amazon Simple Workflow
Implementing AWS Lambda Tasks
CodeProject

Creating PDFs from Images with C# and WPF

September 30, 2015  csharp, development, WPF  comments edit

I like scanning all my documents and keeping a digital version as well as the dead-tree version, just in case. But storing the documents as individual image files is too messy. There are various solutions for merging image files into a single PDF, but I don't want to upload my sensitive documents to unknown parties, so I rolled my own: Image2PDF.

Implementation

I decided to go with a WPF application rather than a web-based solution because it would be unnecessary to upload a bunch of image files to the cloud and download the PDF back; that's a lot of network traffic for something that can be handled locally much faster.

Image2PDF is a simple WPF application developed in C#. I've used the iTextSharp library to create the PDFs. It's a great open-source library with lots of capabilities. I've also started using the SyncFusion WPF control library recently. The entire control suite is free of charge (which is always a great price!). It has a Wizard control, which I decided to go with; I think a wizard makes for a cleaner UI than dumping every control on one window.

Usage

As you might expect, you provide the list of input images, re-order them if needed, choose the output folder, and go!

Step 1: Select input
Step 2: Select output
Step 3: Go!

Simples!
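The post doesn't show the merging code itself. With iTextSharp 5, the core of such a tool can be sketched roughly like this; the class name and the scale-to-fit policy are my own assumptions, not Image2PDF's actual implementation:

```csharp
using System.IO;
using iTextSharp.text;
using iTextSharp.text.pdf;

// Illustrative sketch: put each input image on its own page of one PDF.
public static class ImageMerger
{
    public static void MergeToPdf(string[] imagePaths, string outputPath)
    {
        var document = new Document();
        using (var stream = new FileStream(outputPath, FileMode.Create))
        {
            PdfWriter.GetInstance(document, stream);
            document.Open();

            foreach (string path in imagePaths)
            {
                Image image = Image.GetInstance(path);
                // Shrink the image to fit inside the page margins if needed.
                image.ScaleToFit(
                    document.PageSize.Width - document.LeftMargin - document.RightMargin,
                    document.PageSize.Height - document.TopMargin - document.BottomMargin);
                document.NewPage();
                document.Add(image);
            }

            document.Close();
        }
    }
}
```

In the real application this would run behind the wizard's final step, with the re-ordered file list passed in as imagePaths.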
Conclusion

In the future I'm planning to add various PDF operations and enhance the functionality, but since shipping is a feature I'll limit the scope for now. After all, this was the main feature I needed anyway.

Resources
Source code
iTextSharp project page
SyncFusion Essential Studio for WPF
CodeProject

Next  Blog Archives

Copyright © 2016 - Volkan Paksoy. Blog content licensed under the Creative Commons CC BY 2.5 | Site design based on the Greyshade theme under the MIT license
