FOSSGiving 2017

Long-time readers of this blog and Coder Radio listeners may recall that for the last few Thanksgivings I’ve been writing up, or covering on the show, a list of the open-source tools I’m thankful for and that I feel will be significant in the year to come. If you’re curious, the first year I did this was 2012.

Electron: I know this might be a little controversial, since there are a lot of strong feelings around the Electron project and how Electron apps use more resources than their native equivalents would. However, it’s undeniable that Electron has had a huge impact on the entire computing community, and more specifically on those of us who run desktop Linux. Doubt me? Have you used Slack, Atom, or Visual Studio Code at all this year? If so, you’ve used an Electron app. Given the trajectory of JavaScript and web technologies maturing into mainstream, full-scale application development platforms with ever-increasing performance, you can bet that Electron will continue its trend toward becoming the mainstream consumer and enterprise desktop application development technology for many organizations.

Node.js: I’ve spent a lot of time in the past railing against Node.js being used as a replacement for more traditional web application development technologies, such as Ruby on Rails, but I have to pass the turkey leg to the Node team this year. Over the past few years, Node has been maturing into more of an engine for writing large-scale JavaScript applications, and we are really seeing the benefits of that in 2017. For example, my AI bot Alice is written in Node using the Microsoft Bot Framework. Oh, and every Electron app uses Node too, as do most Electron-like frameworks.

Typescript: Microsoft’s Typescript team came to liberate us from the complexity of managing large-scale JavaScript applications; however, Typescript has turned out to be something of a conqueror. With large-scale projects from other major tech vendors transitioning to it (think Google’s Angular), Typescript has gone from one of many compile-to-JavaScript also-rans to all but dominating that field, and it is now influencing the design of ECMAScript / JavaScript to the benefit of the wider community.

My three picks this year are all along one theme that I think has been emerging for a few years now – the dominance of open web technologies in just about every area of mainstream development. This makes practical sense not only for developers, who have to manage how they invest their training / education time, but also for business stakeholders, who need to get work done as quickly as they can and, if possible, in a cross-platform way. Native app development might make sense in some rare cases, but those are fewer and farther between. I fully expect JavaScript to continue its relentless march to world domination well into 2018. Let me know what you think of this list on Twitter and check out The Mad Botter INC.

Galago Pro Review

The Galago Pro is an exciting new entry in the field of Linux laptops from favorite Linux hardware vendor System76. Currently, I am running it as my home machine on Pop!_OS, doing most of my web and Rails development on it as well as a good deal of scripting for automating some Rails deployments with Docker and Dokku. After using it for more than 50% of my time and living in it for a few weeks, here are my thoughts.

Build Quality: I don’t want to be too forward, but the Galago has a nice body – I mean, it’s a looker! All kidding aside, the metal build is a big improvement over all the other System76 laptops available, and over most other Linux laptops on the market. While I would have preferred a matte screen, the screen is gorgeous. Of course, the elephant in the room is Mac build quality. It’s close but, sadly, not quite there. The largest issue is the sound quality of the onboard speakers: it’s not great and, to my ears, unusable.

Battery Life: I’ll make this quick and brutal: it’s not good. I’m looking at about 4.5 hours on average. Nowhere near what I need, and just bad. If there’s one major issue that needs to be fixed in a second rev, this is it.

Ports: This machine has ports! USB, USB-C, HDMI, Ethernet, and a few others. You can live dongle-free, the way God intended. The fact that I haven’t had to think about ports or adapters on this machine is great. I like having the option to plug into Ethernet when needed. However, I might like to see a future rev give up some of the ports in favor of more USB-C – that’s just the direction the market is going, and as my good friend Locutus is so fond of reminding me: “resistance is futile”.

Performance: This baby runs great! My only real complaint here is that it often sounds like a small drone is attempting to take off from my desk, thanks to fan noise. In my limited research among other Galago users, there does seem to be a correlation between fan noise and the i7 model, which is the one I have. It’s entirely possible that the i5s run with less fan noise, but I haven’t tested that.

Overall, it’s a good, solid Linux laptop. If you’re looking to support a Linux-focused vendor and are in the market, the Galago is worth a look. If you’re looking for a MacBook Pro killer, you might find yourself slightly disappointed. If you liked this post, follow me on Twitter.

Bashing Bash

Ubuntu on Windows
Coder Radio listeners who already caught Monday’s show will know that I have been playing around with Bash on Windows. To be specific, I was using the Ubuntu “app” available on the Windows Store and made no modifications to it or my system.

The experience of just getting it installed is pretty terrible. You have to join the Windows Insider program and, after you’ve done that, restart your computer a number of times until Windows Update notices that you’re in the program and picks up the required update. There’s no direct command-line script you can run for this; you just restart, restart, and restart… until… eventually… it works. Having spent a lot of time in UNIX-like environments, I’ve gotten used to things being a little more precise than “restart and pray.”

Once I got the Windows Insider issue sorted out and installed the Ubuntu “app” from the Windows Store, I started messing around with the Bash interface. On the whole, it functions like a real Bash shell, and the commands you’ve been using on Linux / macOS work just fine. However, that was pretty much the end of my enjoyment of the experience. Like many things in technology, my primary issue is one of expectations. I expected to simply be able to open a Bash window that would start in my home directory and interoperate easily with the local Windows file system. This is not how Bash on Windows works. The Linux file system is sequestered in an obscure section of the hard disk, making even the simplest tasks of working between the Linux system and Windows needlessly challenging. From a practical perspective, what I wanted was to edit some code using Visual Studio Code somewhere in my ‘Documents’ directory and run Bash commands against that code easily.

Because I am doing a lot of work that crosses into the legacy Windows world, I really wanted this to work, and I find it particularly disappointing because other Microsoft tools (Azure, VS Code, and Typescript) have proven to be surprisingly useful for my latest efforts. It would have saved me a ton of headaches. However, it’s just not ready for prime time yet. Ultimately, if you’re working in a Windows environment, you’re still better off just learning PowerShell or using something like Cygwin. Let me know what you think on Twitter or in the comments. Also, heard of Docker Compose but not sure what it’s all about? Check out my quick explainer here.

Pop!_OS First!_LOOK

Pop!_OS
After about three weeks of traveling technologically marooned on macOS island, I decided to try a new operating system – Pop!_OS. Yes, I’ve been using Ubuntu GNOME for the majority of my home office computing, but due to some odd proprietary VPN requirements that only work on macOS and Windows, I’ve defaulted to traveling with my MacBook Pro. No more! From now on, my System76 Lemur will be dual-booting Windows 10 and Pop!_OS, with Pop as the primary operating system for most of my development work. It’s been a few days, and I have some initial impressions of Pop.

At first glance, Pop looks bright and modern, blending the simplicity of webOS with the Material Design aesthetic of Android. Long-time Ubuntu users will notice some similarities to the soon-to-be-discontinued Unity user interface from Canonical. On the whole, I like the bright aesthetic. For the most part, System76 has done a good job of creating icons for the common applications used by its target market of “makers”; however, it is very easy to find icons that do not match the visual design language, and they stick out like a sore thumb.
I particularly appreciated that my editor of choice, Visual Studio Code, and some of the more common text editors used by developers have on-brand icons that fit into the overall system well.

From a practical perspective, Pop is little more than a flavor of Ubuntu. That may sound like a dig, but it’s actually one of Pop’s biggest advantages, since it means Pop has access to all the Ubuntu repositories and is compatible with all Ubuntu software packages. This was very helpful to me, since it allowed all of my system bootstrap scripts to run unmodified to set up my new install.

So far, my usage of Pop has been pretty smooth. It seems fast and snappy. The one point of UX annoyance is the lock screen; it’s just the stock GNOME one, and the requirement to slide up to log in is annoying. I’d like to see a customized lock screen that is more on-brand with the design and doesn’t require that extra step.

It’s important to note that Pop is not yet a fully released product, so a lot could change / mature in the coming months. I definitely like the direction it’s going in, but I do think that if it’s going to be branded an OS for “makers”, there should be something like pre-configured profiles / setups similar to what Dell does on the Sputnik laptops; the idea is that you’d have profiles for different types of “makers” that automatically install the industry-standard FOSS tools for each. Maybe that’s not something advanced users would use much, but it would make provisioning a small shop with Pop much easier. Also, they should just brand it “Pop”, not Pop!_OS, as the current name is confusing.

Want to learn more about what I’m doing in the AI / bot space? Follow me on Twitter.

Pallet Town: Giving Into Javascript Classes

Coder Radio listeners will know that I have some strong feelings on classes in Javascript, and most of them are pretty negative. After some extensive time (measured in years) exploring alternatives to pure Javascript, such as CoffeeScript and Typescript, I ended up going back to pure Javascript but remained Typescript-curious. That takes us to the present day. I find myself working on a very large, feature-rich, and complex Javascript codebase that other people will eventually be working on with me. I’ve long railed that classical inheritance has no place in Javascript, but the reality of the situation is that it makes sense for some use cases and (a much more severe issue) most developers you’d hire out of school, and sadly many more experienced ones, have no frame of reference outside of the classical OO seen in languages like Java and C#.

Presented with this combination of issues, I naturally looked to include some Typescript classes in my code for some underlying data structures that the software is going to rely on. Unfortunately, I hit an issue pretty fast: require statements. My codebase is based on Node.js and uses require heavily. In simple cases, that’s no problem:

var Pokemon = require('pokemon') // Javascript

import Pokemon from 'pokemon' // Typescript

That’s not so bad, but things can get a little less elegant very fast. For instance, let’s say you’re using MongoDB and importing it via require:

var MongoClient = require('mongodb').MongoClient
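As a side note, plain ES6 Javascript can at least tidy up that property access with destructuring. Here’s a minimal sketch; I’m using a stand-in object in place of the real require('mongodb') so the snippet is self-contained:

```javascript
// Stand-in for require('mongodb'): a module object exposing MongoClient.
// In a real app this line would be: const mongodb = require('mongodb');
const mongodb = {
  MongoClient: { connect: (url) => `connected to ${url}` }
};

// ES6 destructuring pulls MongoClient out in one step, equivalent to:
// var MongoClient = require('mongodb').MongoClient;
const { MongoClient } = mongodb;

console.log(MongoClient.connect('mongodb://localhost:27017'));
```

It reads a little cleaner, though it does nothing to solve the Typescript import problem itself.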

There are of course ways to do this but none of them were elegant and simple enough for my tastes. As an aside, if your application uses MongoDB via the Mongoose driver, then you’re good to go with this Typescript statement:

import * as mongoose from 'mongoose'

My application, however, is using the Mongo driver directly, and for some business-specific reasons that was not an option, so I said “so long” to Typescript for this project and looked for another solution to my classical OO problem.

The solution I landed on was the class functionality introduced to Javascript in ES6. Let’s take a look at a sample of what this might look like, assuming that we are working on a Node.js-based application that needs to pull in Mongo the same way my real-world one does:

// importing Mongo
var MongoClient = require('mongodb').MongoClient;
var mongo_url = 'some_url_for_database_access';
// class declaration
class Pokemon {
	constructor(name, type, level) {
		this.name = name;
		this.type = type;
		this.level = level;
	}

	save(callback) {
		let pokemon = this;
		MongoClient.connect(mongo_url, function (err, db) {
			if (err) throw err;
			// insertOne expects a plain document, not a JSON string
			db.collection('pokemon').insertOne(
				{ name: pokemon.name, type: pokemon.type, level: pokemon.level },
				function (err, result) {
					if (err) throw err;
					callback();
				}
			);
		});
	}
}
module.exports = Pokemon; // exporting for use elsewhere

That’s a pretty simple implementation, but it’s enough to give you a good idea of the basic structure of Javascript classes. One aspect I’m finding interesting about working with them is that it’s possible (and, as far as I can see, good practice) to use them as sparingly as possible, hopefully avoiding the deep coupling between classes that tends to occur in large-scale applications written in a classical OO style. So far, my thinking is that these types of classes are appropriate for the base data structures your application relies on, but the majority of your code should be written in a more functional style.
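To make that split concrete, here’s a hedged sketch of the division of labor I have in mind; the names are purely illustrative and not from my actual codebase. The class only models a base data structure, while the surrounding logic stays functional: small, pure functions that take data in and hand new data back.

```javascript
// The class is only a base data structure; no behavior beyond construction.
class Pokemon {
  constructor(name, type, level) {
    this.name = name;
    this.type = type;
    this.level = level;
  }
}

// The surrounding logic stays functional: plain functions over arrays,
// no further class hierarchy.
const levelUp = (p) => new Pokemon(p.name, p.type, p.level + 1);
const byType = (team, type) => team.filter((p) => p.type === type);

const team = [
  new Pokemon('Bulbasaur', 'grass', 5),
  new Pokemon('Charmander', 'fire', 5)
];

const grown = team.map(levelUp);
console.log(grown[0].level);              // 6
console.log(byType(team, 'fire').length); // 1
```

Because levelUp returns a fresh Pokemon instead of mutating the original, the class never accumulates behavior, and no inheritance hierarchy ever needs to form around it.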

I hope this helps. If you want to reach out, please find me on Twitter, and if you’re interested in seeing something cool pretty soon, check out The Mad Botter.

Google Kotlin!

Google announced that the Kotlin programming language will now be a first-class citizen alongside Java for Android development. Ever since Apple announced Swift as the replacement for Objective-C as the iOS app development language, there’s been a lot of speculation about what Google might do in response; some commentators even speculated that Google might also go with Swift. Google has now made its move, and it’s a really smart one. Coder Radio listeners will know that I’ve been a long-time fan of Java, but I have to say, if there’s going to be a successor to it, then Kotlin seems like a great choice.

For starters, it targets the JVM. That means that Kotlin code can run on a wide variety of architectures and devices. For the purposes of Android development this was essential, since Android uses a “version” of the JVM; there’s been a legal cloud over Android because Oracle contends that Google’s Dalvik VM was a bit too much like the JVM. If you’ve never had a “standard” disappear on you, then you might not understand how much risk there is in technologies simply evaporating into the air, leaving you and your users high and dry.

Speaking of the longevity of technologies (especially in the enterprise), Kotlin targets Java 6. What that means is that there’s a huge pool of enterprise Java applications it can work with today, thanks to its interoperability with Java. In practical terms, there’s no need to rewrite your legacy Java application to start writing modules in Kotlin.

Java can be a little strict in terms of checked exceptions. Basically, in Java, if a method can throw a checked exception, the caller must either handle it in a try / catch block or declare it in turn. In larger projects this makes your code pretty verbose, and I’ve long found it to be one of the most annoying aspects of day-to-day Java. Kotlin does not have checked exceptions, giving you more flexibility in how you write your code and reducing all-around cruft.

If you’re an Android developer or just a Java developer in general, I urge you to take a look at Kotlin and form your own opinion. Also, if you’re interested in learning more about Docker or DevOps in general, take a look at my Docker Compose Quickstart Guide and follow me on Twitter!

Hybrid Today, Progressive Web App Tomorrow

App (and I am using that term very loosely here) development has undergone a change. Most companies are eschewing high-cost native development for iOS and Android and going with hybrid solutions using tools like Xamarin or Ionic. This is a great way for organizations to lower their initial development and ongoing maintenance costs, as well as get a useful app for their business needs. What many organizations are finding, though, is that they still need desktop web applications, and you don’t get the code-sharing advantages between mobile and desktop platforms that you do between the two dominant mobile platforms – iOS and Android. Luckily, the march of development tools and frameworks has carried on, and there’s a new solution – progressive web apps. These are web applications that live on the server but, thanks to powerful JavaScript (or in some cases Typescript), can access native-like device capabilities. This, coupled with responsive development techniques and some adaptive CSS, allows an app to scale not only in screen size but also in capabilities, depending on the device. There are a number of frameworks that provide this, but my two favorites are Angular 4 and Polymer 2.
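That “scales in capabilities” idea ultimately boils down to feature detection: the same codebase checks what the runtime offers and upgrades the experience where it can. Here’s a framework-free sketch; the capability flags are illustrative stand-ins for the checks you’d make in a real browser (things like 'serviceWorker' in navigator or window.matchMedia for screen size):

```javascript
// Decide which features to enable based on what the runtime reports.
// In a browser these flags would come from real checks, e.g.
// 'serviceWorker' in navigator or window.matchMedia('(min-width: 1024px)').
function planFeatures(capabilities) {
  const plan = {
    layout: capabilities.wideScreen ? 'desktop' : 'mobile',
    offline: false,
    push: false
  };
  if (capabilities.serviceWorker) plan.offline = true;
  if (capabilities.serviceWorker && capabilities.pushManager) plan.push = true;
  return plan;
}

// A capable desktop browser gets the full experience...
console.log(planFeatures({ wideScreen: true, serviceWorker: true, pushManager: true }));
// ...while an older mobile browser degrades gracefully.
console.log(planFeatures({ wideScreen: false }));
```

Frameworks like Angular and Polymer bundle this kind of detection (plus service worker tooling) for you, but the underlying principle is exactly this simple.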

The Angular team now uses the tagline “one framework, mobile and desktop,” and they mean it. Angular has come a long way from the original release of AngularJS and is now a full application development framework. Currently, Angular is on version four, and if you’ve used an older version of the framework you’re going to find major differences (especially in the area of routing), but the time invested in catching up is well worth it. Using Angular 4 you can create a high-functioning application that runs on everything from your twenty-seven-inch desktop screen all the way down to a mobile phone. I do find that there’s a bit more ceremony in the latest versions of Angular than in the first, but you could argue that’s an advantage, since it also makes the framework more flexible than its predecessor.

Polymer 2 is more focused on JavaScript components than Angular but is no less powerful. While the details of how it works under the hood are different, the end result is exactly the same: you end up with a powerful application that can scale across different screen sizes and devices. If I had to draw a comparison between Angular and Polymer 2, it would be that the Polymer team has made more of an effort to be “pure JavaScript.” I don’t love that as a criticism of Angular for two reasons: 1) Angular is now written in Typescript and is, in my opinion, best consumed using Typescript, and 2) some of the custom directives in Angular actually lead to less boilerplate than the more “pure JavaScript” alternatives in Polymer (especially around defining custom components), so I’d consider that a feature for Angular rather than a bug. Still, that doesn’t mean Polymer isn’t a great choice for your progressive web app development needs.

There are of course other choices for developing progressive web apps, but these two projects have the backing of Google and large communities around them, so they’re likely to stand the test of time, unlike the many <insert_noun.js> JavaScript frameworks of the previous ten years. The future looks bright for the progressive web app, and by extension for Angular and Polymer, given that the trend of businesses demanding more for less from their developers and development partners is likely to continue.

If you liked this post, follow me on Twitter for more!

Why I’m Selling My MacBook Pro – Focus

In a word – “focus.” There are a lot of cool technologies available to developers today, and the truth is that I’ve been spending a lot of time chasing a lot of different, albeit very interesting, technologies and trying to figure out what makes sense for myself and for Buccaneer. Here’s just a brief list of the things I’ve found incredibly interesting technology-wise over the last eighteen months: Docker, Swift, Linux, iOS development, Android development, Arduino, 3D printing, DevOps, Angular, React, and about a thousand other things. All of them are very cool, but there’s not a lot of depth one developer can get in any given technology if he or she is focusing on more than one or two of them at a time.

So what does this have to do with selling a Mac? Well, I spent years writing iOS apps in Objective-C and a significantly smaller amount of time writing them in Swift, and that was fun for a long time, but now Buccaneer and I have moved on to the exciting world of containerization via Docker and the wider DevOps space.

For a time, I was trying to juggle these two priorities, but what I found was that if you have two messages, all you’re really messaging is a cacophony of white noise. Another aspect of this is looking into the future based on current tech trends. While there still are some advantages to native apps over hybrid apps, most enterprise customers are correctly focusing on hybrid or web-powered apps, and those with a little more tolerance for a performance hit are skating to where the puck is likely going in the form of progressive web apps. Enterprises are really deciding between hybrid apps with tools like React or Ionic and full-on progressive web apps using something like Angular or Google’s Polymer. The reasons for this tend to be the usual development and maintenance cost arguments, but also that most of the complexity in enterprise systems tends to be on the back-end rather than the client side.

My solution is extremely simple: skate to where the puck is going, and that’s toward thinner clients backed by complicated back-end systems. Those back-ends will be hard to maintain, which is where containerization provides value, and I find that sort of work is best done on a Linux workstation, so I’m going 100% Linux.

Mac Exodus Over?

Many commentators, myself included, have been making hay out of the trend of developers and other pros moving away from Apple’s macOS in favor of various (usually Ubuntu) distributions of Linux. Vendors like Dell and System76 have seen gains in the professional workstation market against the less than well-received MacBook Pro, but Apple is waking up and smelling the professional angst. Apple has made pronouncements in favor of professional computing on macOS and promised a revised MacBook Pro as well as a redesigned Mac Pro with a more “modular” design. We’re already seeing the so-called Mac Exodus being blunted by Apple’s announcement. The question becomes less a contest of Linux vs. macOS quality and more a race against the inevitable tide of macOS’s professional resurgence. The overall goal for Dell and System76 should be to gain as much market share in the professional workstation space as possible before Apple actually launches new hardware for that market. To that end, I’m going to play “CEO for a day” at Dell and System76 and game out a strategy for each of them. I’m picking on these two firms because I like them and also feel they have the best shot at actually being successful.

Dell has money. Lots and lots of money. That’s great, but it can also lead to conservatism. The Sputnik project was one of the earliest and most successful ventures of a major desktop manufacturer into the Linux space. The product it produced – the XPS 13 Developer Edition – is still one of the most compelling Ubuntu laptops available. Dell needs to widen its Ubuntu product line to include larger, higher-powered models as well as something more akin to the MacBook Air. There will be an R&D / product development cost to this, but it’s worth spending. The other key here is that Dell has a huge asset that System76 doesn’t – it controls its own production pipeline and has the manufacture of PCs down to a science. That should lead to better yields than competitors, which at any reasonable volume means there are some margins to play with. Dell should cut those margins on select base models of Ubuntu Linux workstations to the bone, nearly selling them at cost. This would make for a dramatic cost comparison against Apple, given its already high prices, and should also make Dell a very attractive supplier to creative agencies and the like as they look to cut costs in an increasingly competitive environment. Remember, the goal here is to gain market share fast and hopefully create career-spanning Linux customers who otherwise would have gone to Apple.

System76 doesn’t have Dell money, but it has something else: focus. In many ways, they’re already taking the right steps to up their hardware game by moving away from Clevo and Sager hardware and toward producing their own, but more can be done. My expectation is that within the next eighteen months we’ll see Apple-quality hardware from them once their production lines and processes are fully up and running. Sadly, some of it is going to come at a cost greater than money. System76 has good relations with the Linux community and in particular the Ubuntu community. Canonical (the company behind Ubuntu), in what can charitably be described as a pivot toward reality, is dropping its Unity desktop user interface in favor of GNOME and seems to be more focused on IoT and “cloud computing” than the desktop. This makes sense, given that Canonical has limited resources and needs to make real money somehow, someday, someway. The folks at System76, whom I’ve met and like very much, need to find a way to show leadership in the community by guiding it in a direction that strengthens the Ubuntu desktop as the leading choice for professional workstations. The key here is to lead the community in the right direction but resist the temptation to commit too much of their own limited development resources to the effort. I know what I’m suggesting is less being a good community citizen and more leveraging the community, but the reality is that the Linux community has been wasting development resources on alternatives to alternatives for things like package management and window managers – strong leadership could finally close some of these questions and focus the community’s efforts.

This is a race against the clock, and make no mistake, the window is closing quickly. If Linux workstation vendors such as Dell and System76 can’t make significant gains in market share quickly, then this whole “Mac Exodus” will be little more than a blip in the history of Apple’s domination of the modern professional workstation market. If you have any questions or comments, Tweet me, and please check out my YouTube channel, where I offer Docker and DevOps tips.

MacBook Pro 2016 Review

I’ve been having a lot of fun working on Linux over the last few months and continue to use it as one of my two daily drivers, but the realities of corporate VPN policies – and the fact that even Ubuntu is not supported by most common corporate VPN clients – forced me to pick up a Mac. Being the type of guy who likes to go big or not at all, I went for the 15” MacBook Pro. Take a look:

Sure, it’s not the absolute highest-end Mac you can buy, but it’s by no means a slouch. Here are some thoughts after working with it for about two weeks.

The Good: The MacBook Pro has the best screen I’ve seen on any laptop in over a decade of being primarily a laptop user. The build quality is very good, and the “Space Gray” gives it that “pro” feel I’ve found missing from Apple products for some time. While I’ve been pretty critical of the full-fledged (one might even say “courageous”) adoption of USB-C over USB 3, I see the long-term value in having all your devices use a standard port for charging and data transfer; however, I question whether Apple would be willing to have the iPhone adopt the standard as well in its next iteration.

The Bad: While the MacBook Pro feels premium, in practice its performance doesn’t feel like the nearly $3,000 I paid for it. Knowing that it doesn’t run Kaby Lake may be part of my problem, and certainly for that amount of money I’d like to at least have the option of more than 16GB of RAM. Also, the price jump between storage configurations feels a lot like gouging. All in all, the worst part of this machine is the price tag and the feeling that you have “SUCKER” painted on your forehead when you compare the price to the specs.

The Ugly: I’ve developed Mac apps and iOS apps for a long time, and in the past I’ve been skeptical of Apple’s additions to both platforms. In most cases, I’ve been forced to at least concede that some users may like some features. This simply is not the case for the TouchBar. Try as I might to find a productive and interesting use for it over its classic function-row predecessor, I’m left feeling like I’m holding what in ten years will be a curiosity of Apple hardware design history, unlikely to be widely adopted by developers let alone repeated in other product lines. I also can’t help but feel that the TouchBar is at least partially responsible for the hefty price tag.

Overall, the MacBook Pro is a fine tool, and it does a job that I need done. If you’re looking to fall in love with a device, you’re in the wrong place. However, if you are like me and consider your machines tools to do work on (much like a carpenter might look at a particular table saw or hammer), then you’re likely to get some return on your investment and be happy in the end.