Tag Archive for programming

Programming Pitfalls: Circular Imports in Objective-C

I spend a lot of time in Xcode working in Objective-C for iOS and Mac OS X projects. In many ways, Objective-C has become my “native language” among programming languages. Still, that doesn’t mean that I can’t make really silly mistakes when working in it. In fact, through the magic of GitHub you can see the mistake in real time in my open-source Cocoa / Cocoa Touch networking library, MDNetworking.

The mistake, like the best dumb mistakes, is small and simple, but it can take more time than I’d like to admit to notice and resolve. Take a look at this pseudo-code sample, showing two headers that import each other, to see what went wrong:

#import "Foo.h"
#import "Bar.h"

@implementation Foo
 // CODE
@end

 

#import "Bar.h"
#import "Foo.h"

@implementation Bar
// CODE
@end
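
For reference, here is a minimal sketch of how this kind of cycle is usually broken in Objective-C: forward-declare the other class with @class in the header and push the full #import down into the implementation file. Foo and Bar are hypothetical stand-ins here, not classes from MDNetworking.

// Foo.h
@class Bar;        // forward declaration instead of an #import, so no cycle can form

@interface Foo : NSObject
@property (nonatomic, strong) Bar *bar;
@end

// Foo.m
#import "Foo.h"
#import "Bar.h"    // the full import lives safely in the implementation file

@implementation Foo
// CODE
@end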

That’s it. Two headers importing each other is all it takes to reproduce this pitfall, and a forward declaration is usually all it takes to get out of it. As always, I hope this has been illuminating and entertaining for you, and please do follow me on Twitter.

Coding is More than Just Code

Every week I do a little online radio show called Coder Radio with the incomparable Chris Fisher. Most of the show’s feedback is overwhelmingly positive; however, there is, on occasion, a comment made about the show lacking “code”. In the past, I’ve brushed these comments off by reading a little Objective-C or Ruby on air in order to make a point — the point being that reading code on air or getting extremely technical is more than a little boring. Still, I’m doing more than trying to entertain by not reading out code on air or going into the nitty-gritty of the latest hot API.

To start, APIs change and, like all creative works (yes, coding is a creative exercise), are products of the time and place they were written in. As a thought exercise, let’s take a look at Dropbox’s new mobile APIs aimed at handling what (boiled down to its most basic, oversimplified definition) could be called super-caching. That’s a really good idea in 2013, when a lot of people have smart devices but terrible mobile service. However, it’ll seem pretty quaint in ten or so years when the world is blanketed with the equivalent of FiOS over the airwaves at an affordable price. Of course there is an aspect of the show that focuses on what’s new or “in”, but on the whole I try to keep the show as “evergreen” as possible.

I like C#, but maybe you don’t. Would you really want to listen to someone go on and on about the particulars of C#? Conversely, maybe you like PHP, but I only go in depth on C-family languages — is that giving you a lot of value? When it comes down to it, the particulars of a given language are just that: particular to that specific language. Larger concepts, however, can provide value to all sorts of developers working on any platform.

I’ve noticed that it is mostly the younger set of the audience that seems to have this issue, and to a point that makes sense. After all, when you are just starting out it seems like learning the API for whatever platform you are working on is your highest priority, because you are under a lot of pressure to actually prove yourself. However, once you get beyond that initial green period, you’ll likely find that being successful in the industry has very little to do with how well you know a particular API and much more to do with how well you can navigate processes, understand other disciplines including business and business development, and (I know this will sting a few of you) interact with people in meaningful and pleasant ways.

The bottom line is that Coder Radio, like software development itself, is and will continue to be about far more than code. Feel free to comment on Twitter or Google+.

Programming Pitfalls: WinRT MediaElement URL Scheme

I’ve been doing a good deal of C# WinRT development recently, and for the most part it hasn’t been bad. This week, however, I found a pitfall that is not only so simple it’s silly but also managed to waste an hour or so of my time. WinRT has a class called MediaElement that allows you to play different types of media using Windows’ built-in media engine.

As you might expect, instances of MediaElement take a Uri for their source media. So, let’s say you want to play a video from your app’s bundle — perhaps an introductory video or something like that. You might try:

// I am assuming you created a MediaElement called "player" in your XAML
player.Source = new Uri("/Appname/Assets/Media/Video.wmv", UriKind.Absolute);
player.Play();

Sadly, that will crash every time. The good news is that your logic is fine, but the bad news is that you are missing a silly implementation detail of how Microsoft has decided to refer to in-bundle URLs. To make that code work, you simply need to make one small change:

// I am assuming you created a MediaElement called "player" in your XAML
// The ms-appx scheme tells WinRT to look inside the app package
player.Source = new Uri("ms-appx:/Appname/Assets/Media/Video.wmv", UriKind.Absolute);
player.Play();

That’s it. Clearly, there is a little bit of magic here… hence the need for the prefix, but it works and is the prescribed way to do this according to MSDN. Hope that helps someone. Questions? Comments? Find me on Twitter and Google+. This post was brought to you by Code Journal and Fingertip Tech, INC.

My Android Christmas Wish

Android developers will know that there are some serious issues of fragmentation on the platform. These issues have been well documented and, at times, greatly exaggerated. The truth is that in the last year Google has taken great steps toward making the Android development experience more on par with iOS, and it could be argued that developing for Android is now as enjoyable as developing for iOS. However, there is still one large weakness in the blessed Android development tool chain: a good GUI designer.

Let’s face it — there really aren’t any good GUI designers for relative layouts. Certainly, the one Android developers are provided in Eclipse is a far cry from Apple’s Interface Builder. To be fair, Interface Builder traditionally has not had to deal with relative layouts, and there is an element of added complexity in dealing with them. Still, that doesn’t excuse Google’s lackluster tool.

Like many Android developers, I’ve taken to working almost exclusively with layouts by editing the actual XML code. There’s nothing wrong with this, but it can be somewhat tedious when developing a complicated layout, since you have to keep rebuilding your application as you make changes to the layout.

If Google could improve or replace the Android layout tooling, I am confident that we would see more complex and attractive user experiences in our Android apps. So, what do you say, Google? Questions? Comments? Find me on Twitter and Google+. This post was brought to you by Code Journal and Fingertip Tech, INC.

The Best Feature…

I’ve been hard at work mapping out the future of Code Journal and took some time to go through user e-mails that requested new features; the idea was that I would simply implement the feature that the most users had e-mailed asking for. That feature was a resizable UI.

The problem was that the people, my customers who I of course care a great deal about, were very passionate about this particular feature, and the feature seemed simple enough to implement; after all, I’ve implemented resizable UIs in a number of Mac OS X apps for clients, so why couldn’t I apply the same techniques to Code Journal? Right? Wrong.

Code Journal started out pretty simple; it was an app that pulled and processed JSON data from the Github API; in fact, the most complicated thing in the private demo (think alpha) version of the app was Github’s OAuth. I was thrilled. The app was nice and simple.

Over time, and as it got closer to the all-important 1.0, compromises had to be made. There is no need to enumerate them here, but suffice it to say that one current feature (Github Enterprise support) is responsible for a disproportionate amount of code and almost all of my support requests. Does that mean I’ll be pulling enterprise support? Absolutely not. The truth is that these requests have more to do with customers having non-standard settings on their Github Enterprise deployment or the servers that host it, rather than anything to do with Code Journal. Enterprise support is a slam dunk, and I personally love being able to use Code Journal with clients who have Github Enterprise servers; the best part is that most of the time all I have to do is fill out the enterprise field in the app with my credentials and we are good to go — no need to bother a sys-admin at all.

In short, the complexity that supporting enterprise customers caused was a good thing. Back to the feature request: resizable interfaces. I understand why some people might want this feature, but, to be honest, I don’t think it’s a good idea. Even on my 27-inch Cinema Display, I often find myself pressed for screen real estate when tackling some hairy memory management problem or when I am “in the zone.” Like many of you, I am not willing to compromise by putting my monitoring tools into a separate space and, again like most of you I imagine, I only have the one screen. The other issue, of course, is that Code Journal is a Mac app, and a lot of developers that work on Macs work on 13” laptops.

Still, it’s hard to ignore your paying customers, so I went against my instincts and began coding. I won’t bore you with the details, but it was doomed from the jump. I spent two weeks coding and agonizing over resizing table views and experimenting with Auto Layout. What resulted wasn’t terrible, but it had a number of rough edges and, more importantly, was not a feature that I could see myself using, and I believe it would have been a major source of maintenance. The feature is not going to be released. At this point in the product’s life, it is more important to keep complexity out of the equation than to add every suggested feature; this is particularly true of features that don’t provide any additional functionality. The best feature is the one you never have to maintain.

Comments? Questions? Share them with me on Google+ or Twitter. This post was made possible by Code Journal and Fingertip Tech, INC. If you are a Github user please check out Code Journal, and if you are interested in having an Android, iOS, or web app developed please contact me. Also, check out the new free version of Code Journal for iOS.

 

Dependency Anti-Patterns

Open source frameworks are great and I encourage you to use them when appropriate. However, there is such a thing as too much of a good thing. There is a point where your project becomes more a stitched-together Frankenstein’s monster than an elegant, well-crafted piece of software. Of course this doesn’t just happen by magical fairies adding dependencies; there are warning signs that you may be in a bad spot with your use of open-source before it becomes a real issue.

 

In no way do I intend to criticize the proper use of open-source software. In fact, I still use ASI and similar plumbing tools in many of my projects. The trick is that I always have an exit strategy; for example, in the case of ASI, if the time ever came to switch I’d probably move over to the more modern AFNetworking. On the web side of life, I tend to use jQuery fairly heavily but always keep an eye out for smaller (possibly leaner) alternatives.

 

If you don’t have an exit strategy, things can get very bad very quickly if the project you rely on is either canceled or stops being updated and becomes too out of date to easily use in your domain. But why? Why do we do this to ourselves? Why do we keep inheriting projects with huge and in many cases unneeded dependencies? Aren’t there warnings? There are warning signs, and I’ve seen some common patterns in the (often flawed) logic of some of the worst offenders.

 

Pre-1.0: Lately, I’ve been seeing a disturbing quantity of pre-1.0 software being used in production. This is madness. Pure madness. If the creators / maintainers of a project don’t think it is production-ready, then you simply should not be using it in production. If you do and run into issues, you pretty much deserve what you get.

 

Fear: This has to be the most common reason that I see developers take huge dependencies that are more than a little questionable. Many developers fear technology that they don’t understand or don’t have experience with and attempt to abstract it away with clunky frameworks. This is pretty common in the Apple space (iOS / Mac OS X), since many developers fear Core Data and jump through some pretty silly hoops to avoid using it directly. These ‘hoops’ tend to be large frameworks that your entire project ends up being written around, making removing them later a costly and painful endeavour. In the example of Core Data, the irony is that Core Data is itself an abstraction designed to simplify working with datastores.

 

Closed-Mindedness: This is another area where the Core Data example comes into play. There are a number of newer iOS developers who feel that Core Data should be just like Rails / ActiveRecord, so they turn to dubious frameworks to get that. Again, this is particularly questionable since Core Data already supplies a style of object-relational mapping and is in itself an abstraction. Not to pick on Rails folks too much, but I’ve been seeing a scary number of ‘Rails-like’ frameworks popping up. The issue with this is that Rails already exists and does a good job solving the problems it was designed to solve; however, it was not designed to solve all problems, and other approaches and tools would be more appropriate for those. Of course this issue can go all the way down to the language level. For example, I’m often forced to work with Objective-C that has been written as though it were Java, by people who seem to have no understanding of the messaging paradigm in Objective-C and are more than happy to force Java concepts / practices on the language whether they fit or not.

 

Budget: This is the most understandable reason for taking on large dependencies. Today’s project managers are severely constrained in terms of budget available per project, and there is a real need to cut costs. I have a lot of sympathy for this one. However, you should remember that there is more to a project’s cost than just the initial development cost; there is also maintenance. If you do take on one of these large dependencies, make sure you have a roadmap for how you will maintain and grow with the dependency itself; there is nothing worse than building an app on framework A and realising a year later that it needs to be substantially rewritten in framework B to adapt to your changing needs or scale.

 

 

That’s all for me. Let me know what you think on Google+, Twitter, or App.net. Buy Code Journal!

Simply Loading TableViewCells from a Nib

UITableViews drive a significant portion of iOS apps. Apple has provided some pretty great tools for developing decent looking apps based on them, but there are times when you might want to do something a little more stylish than what iOS provides. That’s when you probably consider writing your own UITableViewCell subclass or at least adding a category to UITableViewCell, though I find that is probably the less appropriate option in most cases. Getting your cells to look just right can be more than a little time consuming. Luckily, there is a bit of a shortcut you can take: loading the cells from a nib.

Before I show you how to do this, you should know going in that there are some drawbacks and that this is not an appropriate solution in some cases. Loading cells from a nib is less performant than just drawing the entire thing in code. Generally, you would not want to do something like this in a resource-intensive app, such as a game, for example. OK, now that we have gotten that disclaimer out of the way, let’s get back to the point:

- (id)initWithStyle:(UITableViewCellStyle)style reuseIdentifier:(NSString *)reuseIdentifier
{
    self = [super initWithStyle:style reuseIdentifier:reuseIdentifier];
    if (self) {
        // Load the top-level objects out of SpecialCell.xib.
        NSArray *nibContents = [[NSBundle mainBundle] loadNibNamed:@"SpecialCell" owner:self options:nil];
        NSEnumerator *nibEnumerator = [nibContents objectEnumerator];
        NSObject *nibObject = nil;
        // Walk the nib's objects and adopt the one that is our cell.
        while ((nibObject = [nibEnumerator nextObject]) != nil) {
            if ([nibObject isKindOfClass:[SpecialCell class]]) {
                self = (SpecialCell *)nibObject;
                break;
            }
        }
    }
    return self;
}

The above code presupposes that SpecialCell is a subclass of UITableViewCell; I omitted the header for the sake of brevity. So, let’s get the obvious stuff out of the way. This method overrides initWithStyle:reuseIdentifier:, and yes, you could write a second method rather than overriding that one. We have the normal if (self) check that we see in inits. All pretty much standard.

OK, so what is actually loading the nib? Well, if you look, the nib is being loaded and its contents are being placed in the nibContents array. Once the contents of the nib are all in the array, an enumerator is created based on it. After that we simply loop through all of the objects, assign self to the one whose class is SpecialCell, and return it.
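
To make the flow concrete, here is a minimal usage sketch from a table view data source. The @"SpecialCellID" reuse identifier and the label text are assumptions for illustration, not part of the original code.

// A hedged sketch; assumes SpecialCell is the nib-backed subclass shown above.
- (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath
{
    static NSString *cellIdentifier = @"SpecialCellID";
    SpecialCell *cell = (SpecialCell *)[tableView dequeueReusableCellWithIdentifier:cellIdentifier];
    if (cell == nil) {
        // Falls through to the nib-loading initializer above when there is nothing to reuse.
        cell = [[SpecialCell alloc] initWithStyle:UITableViewCellStyleDefault reuseIdentifier:cellIdentifier];
    }
    // For reuse to work, the cell in SpecialCell.xib should carry the same identifier.
    cell.textLabel.text = @"Row content";
    return cell;
}
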
Pretty easy, right? Just keep in mind that this can be a source of slowdown in your table and may not be an appropriate solution for your app, but it can be a good time saver for prototypes or non-resource-intensive apps. If you have any feedback, I can be found on Twitter or Google+.

On Finding a Java Intern / Freelancer

Initially, I planned on building a quick API in each of the software stacks that I had written about in a previous article. Naturally, a Java-based solution was near the top of my list going in. There were a number of reasons for that, including my familiarity with the language and the availability of cheap help; after all, pretty much every college teaches Java and students are always looking for some part-time work.

If you have ever had to manage a project, you probably already see the fallacy in my line of thinking. That’s right: I blindly assumed that throwing more man-hours at the project would lead to more productivity; i.e., I foolishly assumed that there was a positive and causal relationship between time spent on the project and progress.

So, I got in touch with an old professor and asked to speak to third- and fourth-year students who were studying programming. Being from the university’s English department, he quickly introduced me to a colleague in the appropriate department, who eagerly informed me that a heavy emphasis had been put on practical skills, and in particular Java, in her department since I had graduated, and that she was sure I would quickly find appropriate candidates.

I was happy to hear it. I gave her the go-ahead to give my e-mail to her students and began the vetting process. Right off the bat I was seeing some strange things: some students proudly boasted Excel macros as their highest programming skill, many others failed to list a single project they had done on their resumes, and still others failed to list any programming or programming-related courses or competencies. Still, I e-mailed several students and set up phone screens with them.

During the screens I asked all the students if they had ever worked with source control. Most said they had heard of it but had not worked with it. Those who had, had only worked with SVN and did not know what a distributed source control system is. To be honest, I wasn’t surprised by this. I assumed that most professors would be focusing on somewhat older and more widely accepted enterprise tools rather than newer alternatives. Still, not knowing what source control is in your last year of a development-focused degree is a huge red flag.

Disappointed, I moved on to my second line of questioning: what experience do you have with UNIX-like operating systems? To my surprise, only one of the students had ever really used a UNIX-like operating system (Ubuntu in his case), and he was on the IT support track rather than the software development track. I expected most of them to be working on Windows PCs day to day, but I assumed that they would have had some training on a UNIX system beyond a history lesson. Ultimately, knowing how to work on a Mac or Linux box would not be essential to the work I needed done, so I didn’t press the point too hard.

I moved on to asking some technical questions, starting with the obligatory ones: what is the JVM? What is garbage collection? What is the difference between a class and an object? What is inheritance? Thankfully, most of the students were able to answer these to my satisfaction.

I finished the call by asking each student to tell me about a project they had worked on in Java or any other programming language. This is the question that made me decide to do the work myself. Not a single student had done any programming during their college career beyond small code snippets for assignments. No Github projects and no Super Mario clones. I was crushed.

At this point I realised that adding any of the students I had spoken with to the project would not only cost whatever I ended up paying them, but would also lead to some scary bugs that would take more time on my end to fix than doing the actual coding myself. Basically, more hours would probably lead to less progress, and I just can’t have that. Ultimately, I wrote to the professor, explained that I did not feel that any of the students she had sent were qualified for the project, and thanked her for her time.

Did you take Computer Science? Did you get any practical training if you did? I can be reached on Twitter or Google+.

Also, if you are a student who knows some Java and is looking for some freelance work contact me on Twitter or Google+.

Picking a Backend Software Stack pt1

If you have ever been lucky enough to be able to pick the technology stack that you work on, you know how exciting the process can be. However, it can also be a daunting task. Sure, if you are developing an Android app or an iOS app you are pretty limited in what software stack you can use, so that makes your choice pretty easy. The real fun comes in when you are developing a back-end service for an app.

Open-source or proprietary? REST or SOAP? Windows or Unix? Dedicated hardware or cloud solution? These are just a few of the decisions you will have to make before you can begin coding your web service.

Before we get too far off the rails here, I should explain why I am writing this now. If it weren’t obvious enough, I am writing a back-end service for a mobile app, and the best part is that I am the client. That’s right: no worrying about a client’s IT staff not being familiar with UNIX-like systems (a lot more common than you’d think), or having a culture that doesn’t want to implement any more services in their aging technology but is at the same time unwilling to look at more modern solutions; believe it or not, there is a lot of VB6 code still running out there. I’m not bitter. I promise.

So, as the client, what do I want? Well, I basically need a web service that will return JSON to mobile clients, interact with social networking services, store user data, be fast to implement, easy to maintain, and deployable to a number of possible cloud hosting solutions. I am trying to do this project as lean as possible, meaning I want to fail fast and as cheaply as possible. Tall order, I know, but I think the developer (yours truly) is up to the task.

OK, so where does this leave us? Well, we know what I want, and now we need to take a look at the options. For the purposes of this series I am going to limit my choices to the following platforms: Ruby on Rails, Sinatra (Ruby), Play (Java), ASP.Net (C#), NodeJS (JavaScript), and CakePHP. From here, I am going to go through each option in its own post and discuss the pros and cons of each technology. Check back soon for part two.

Questions? Comments? Hate mail? Find me on Twitter or Google+.

Frauds and Other Impostors

Impostors. Frauds. We’ve all heard of them, and we all go out of our way during the hiring process to avoid them, but do they really exist? Sure, there are those that might not do as good a job as others, but is there really anyone applying for programming jobs that they are certain they cannot do? I think not. Instead, these impostors might be more accurately slotted into a number of groups based on their predilections and development habits.

The Inexperienced:

We’ve all had our first job. If it has been a long time for you, think back on those first three months. How many mistakes did you make? How many lines of code did you write that you would never allow through a code review today? Would it be easier to simply count the lines that would pass? Does that make the you from the past a fraud?

You don’t have to be a green developer to be inexperienced. Consider moving to a completely different stack or platform. Are you used to working in a garbage-collected language? Good, try some C++ or Objective-C (iOS, and no cheating with ARC). Are you a Ruby on Rails guy? Awesome, how about you boot up the Windows box in your closet and clone your awesome Rails app in MVC 3? My point is not to be snarky but to point out that we were all beginners once and can be again if forced to move out of our respective comfort zones; though I do feel that it is a good idea to constantly play with technology outside of your comfort zone just for the educational value.

The Stubborn:

We all develop particular coding styles, habits, and a level of comfort with the technologies we use every day, but what happens when these things become hindrances to our development work? Well, many of us become stubborn, not wanting to change our ways; this becomes particularly troublesome when the ‘Jr’ is dropped from our job title or replaced with a shiny new ‘Senior’ or ‘Lead.’ Of course, I am not arguing that we should all ditch our proven methods and tools in favor of whatever is trending on Twitter. However, we should be open to evaluating these trends; even if technology X turns out to be just another fad, there is always educational value in learning something new.

The Optimistic:

There is always learning to be done when starting a new job or project, but have you ever been tempted to say, “I can just learn technology X before I start the job and be fine”? Well, if you have, and you more likely than not picked up that technology just fine and are happy with your new position, congratulations. If not, you are part of the reason for the “fraud” scare. Basically, you misled your potential employer or client and have now put them and yourself in the awkward position of having to deal with reality. I’m not one for judgments, but keep that in mind the next time you fudge a resume to get that next big project. More likely than not, if you have been guilty of this type of fudging, you meant well and truly believed that you would pick up what you needed to know in time, but were mistaken.

The picky among you will be quick to point out that what I call “optimism” is in fact a form of fraud. I would argue that fudging one technology or tool, though misguided, is not the same as claiming to know something as big as an entire programming language when you do not. Am I too soft here? Are these people really just outright frauds? Let me know what you think on Twitter (@dominucco) or Google+.