Why I love developing for the Apple TV

tvOS isn’t getting the credit it deserves. The Apple TV has been the most fun platform I’ve developed for in a long time.

The television is the physically largest canvas currently available.  It has long been the centerpiece of the living room, yet historically it has been the most difficult platform to break into.  Few people are willing to build awkwardly horrible “Samsung apps,” for example.  Even the Roku, previously the most accessible developer platform, has a very small developer footprint (I wrote about developing for Roku previously).  Having the power, flexibility, and ubiquity of UIKit to turn this canvas into anything you can imagine is really very gratifying.

But just being able to build for the big screen alone isn't why I enjoy developing for the Apple TV.  There are real things that make tvOS apps better than their mobile counterparts.

As a developer, how many times have you sat with your designer debating the tap target sizes of UI elements in your iOS application?  I’ll venture to guess it’s more than you’d like.  But this isn’t a problem with tvOS’ Focus Engine.  As you explore each screen with your remote, anything that’s selectable makes itself obvious.  It doesn’t matter how small the element is; with the Focus Engine it’ll jump out at you.  The default UIKit behavior literally makes these items larger when focused, and with UICollectionView you can directly manipulate them via remote gestures.  This is even better than direct manipulation on a phone or tablet, where your fingers cover up the element you’re interacting with.

Image courtesy of https://medium.com/@flarup/designing-for-the-apple-tv-5992c3aab1e4#.71q02v28u


I'll be the first to admit the navigation paradigms of iOS applications leave a lot to be desired.  They cause real cognitive overhead for a user: some screens “come from the side” in a navigation controller stack, while others “come from the bottom” as a presented modal.  And then there are nav stacks within modals, and gestures with custom transitions, all adding another layer of what seems like no rhyme or reason to the user.  This has been simplified with tvOS.  If you navigate to a new screen, there is no difference in appearance regardless of how the developer presents it.  Pushing onto the stack looks the same as presenting a view controller.  And best of all, the Menu button always brings you back to where you came from, regardless of how you got there.  The developer may be forking you off into any number of navigation trees, but the user doesn’t have to know or care.  You can skip those discussions about your overly complex navigation architecture and go get a nice coffee with that time instead.

Of course there are exceptions to everything I've stated.  You could easily create custom focused appearances that make something hard to find, or any number of crazy navigation schemes.  But the point is that tvOS was built for simplicity.  Sure, every mobile app developer will tell you they're trying to get you to the most important thing in the least number of taps possible, but it's obvious that's only partially true.  iOS applications don't usually adhere to the "If it's not important, leave it out" philosophy.  tvOS apps (at least currently) do.  Content is the priority, not share sheets, find-friends screens, and "OMG Rate Me on the Appztore Plz" popups.

I've only had the privilege of building one application for the Apple TV, the same one I built for Roku: The Bat Player.  It's been a fantastic experience.  If you have an idea that works for the big screen, I encourage you to build it.  Especially non-game applications, since there's currently a ton of games but not a lot else.

Feel free to reach out if you're building something cool for the Apple TV, I'd love to hear about it.

Goodbye, Rdio

Like everybody else, I felt like I had heard every song on Pandora by the time I discovered Rdio.  I was user number 4075, a paid unlimited user, signing up on June 6, 2010.  I was really happy to hear anything I wanted.  I used the Rdio Desktop client to create a collection on the service with my local iTunes Library.  I listened to it a lot.

Like many others, I moved to Spotify once it finally came to the US.  I don't recall exactly why, but if I know myself like I think I do, it was simply because I wanted to try something else.  But I always thought of Rdio as the service for music lovers.

I ended up with a role at Rdio due to an acquisition.  They were bringing in the TastemakerX team.  There were other places the TMX team could have landed, but I wanted it to be Rdio.  It was perfect.  Just the right size, opportunities for growth, and most of all I'd be working on a product that people really, really loved.

I worked on a handful of very different things while at Rdio.  When I joined they were midway through a major redesign.  I jumped in, cranking out parts of the UI that still needed to be done.

I did a hardware integration for Jaguar/Land Rover using Bosch's technology.  I doubt many people have seen that work, but it was cool to be working with hardware.

I think I was the only engineer who started and finished the Rdio Live project.  Live wasn't a popular feature within the company, and I understand it only happened for political and business reasons.  But by the time it launched, people thought it was cool.  And it was.  We got terrestrial radio seamlessly integrated into an on-demand service.  It worked far better as an experience than I thought it would.  It even brought in new users.  I was told it was the most friction- and hassle-free launch in Rdio's history.  So that's something.

The mobile team was small.  Probably too small.  But I preferred it that way.  I got to work on every piece of the codebase.  Sure, the original app was in C#, using Xamarin, due to a decision by people no longer at the company, but the current team didn't want it or like it.  So we started rewriting it.  We were building from the ground up, in Swift (with some Objective-C), a new iOS application that will never see the light of day.  I was focusing on the playback engine, something I'd never done a deep dive into before.  With Swift being new, and C# being new to me, I developed in two new languages on a daily basis in under a year and a half.  So again, that's something.

I may be the only person who received an offer from Pandora who feels about 50/50 on whether to accept it.  I always told people Rdio was about as large a company as I'd want to be at.  When I started I think it was just around 150 people.  It's probably closer to 200 globally now.  Pandora touts 1800+ employees.  I asked representatives to explain how I won't get lost in a sea of engineers.  They couldn't tell me I wouldn't.  They couldn't tell me I'd matter.

There's a possibility I'm more touchy about this than the rest of my team because this is the second time I've been acquired in two years.  Maybe I feel like I'm being pushed around with no control over my own destiny.  That I can be bought and sold to the highest bidder.  I don't actually feel this way, I can walk away at any time.  But the fact that it's come to mind at all means there might be something to it.

Rdio's shutdown isn't just a loss to me, or the people I share an office with every day.  It's a loss to the world of music.  Rdio was the service for music lovers, by music lovers.  In this world of The Streaming Services vs. The Music Business, loving music isn't enough to keep the lights on.  There's no guarantee Pandora can do any better.  They're the ones who, prior to their IPO, were petitioning the Copyright Royalty Board every few years saying they'd go out of business if rates were hiked.  They're trying to pay less to artists.  The same artists who are already trying to put the streaming services out of business.  So I'm not sure how this can end well.

Regardless of any of this, Rdio will be a thing of the past soon.  A story about a fantastic product sold in a fire sale because the music streaming industry is brutal.  But I'm really lucky to have been a part of it, even for a short time.  Goodbye, Rdio.  You had too much empty whitespace.

Launching The Bat Player on the Roku Store and its aftermath

After a year of development, The Bat Player went live on Roku's Channel Store.

The Bat Player first went public in September of last year.  This means if you knew the right Roku Channel Code you could install it on your device.  I put up a web site to make this easier, and shared it with a couple specific communities like the Roku forums and Reddit's Roku subreddit.

My goal all along was to get it into their mainstream directory.  But between their approval delays (it can take months for them to even get you into their queue) and many rounds of requested changes, it took some time.  Add a period early this year where I took a break from trying to get it approved, and you'll see why it took so long.

But that's not what this post is about.  It's about what happened after it went live.

But first let's quickly go over how The Bat Player works.

As far as the audio aspects go, it's a pretty dumb application.  It doesn't really know much.  Due to Roku's limitations with streaming protocols, it doesn't even know when a song changes.  To overcome this I built a service, known as The Bat Server, that can be pinged with a URL to find the title of what's playing on a station via a variety of different methods.  This part I actually broke out into my first public Node.js module.
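To give a flavor of one of those methods: Shoutcast-style stations that honor the `Icy-MetaData` request header interleave title blocks into the audio stream every `icy-metaint` bytes.  A minimal sketch of parsing one such block (illustrative only, not The Bat Server's actual code):

```javascript
// Hypothetical helper: parse a Shoutcast/Icecast ICY metadata block like
// "StreamTitle='Artist - Song';StreamUrl='';" padded with NULs.
function parseIcyTitle(metadataBlock) {
  // Strip the NUL padding Shoutcast uses to round blocks out.
  const cleaned = metadataBlock.replace(/\0+$/g, '');
  const match = cleaned.match(/StreamTitle='(.*?)';/);
  if (!match || match[1].length === 0) return null;

  // Most stations send "Artist - Title"; split on the first " - ".
  // (Real streams can embed quotes and dashes; a real parser is more careful.)
  const [artist, ...rest] = match[1].split(' - ');
  return rest.length > 0
    ? { artist: artist.trim(), title: rest.join(' - ').trim() }
    : { artist: null, title: match[1].trim() };
}
```

Title detection is only one of the server's methods; others involve polling station status pages, which is why the detection logic lives server-side rather than on the Roku.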

Easy enough, but that's not what I wanted The Bat Server for.  I wanted contextual, useful information.   So knowing the artist name allows me to grab some relevant information from Last.FM's API.  That too, is easy enough.

Song information is more difficult.  I query a few different services (Last.FM, Discogs, Gracenote, Musicbrainz) to get an array of possible albums a track could be from.  Then I try to make an educated guess based on available information.  This whole song and dance is pretty expensive.
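As a rough sketch of what that educated guess can look like, here's a toy scorer over candidate albums.  The field names and weights are invented for illustration; they are not The Bat Server's real heuristics:

```javascript
// Hypothetical album-picking heuristic: each lookup service contributes
// candidates, and we prefer ones that multiple services agree on, that
// have useful metadata, and that aren't compilations.
function pickBestAlbum(candidates) {
  const score = (album) => {
    let s = 0;
    if (album.sources) s += album.sources * 2; // agreed on by N services
    if (album.year) s += 1;                    // has a release year
    if (album.artUrl) s += 1;                  // has cover art to display
    if (/greatest hits|compilation/i.test(album.name || '')) s -= 2;
    return s;
  };
  return candidates.reduce(
    (best, album) => (score(album) > score(best) ? album : best),
    candidates[0]
  );
}
```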

On top of this the client makes separate calls to The Bat Server to do some image manipulation for a few things (dynamic header images, station icon resizing, building backgrounds and artist images) via ImageMagick on the fly. 

All this happens every time The Bat Server sees a new song.  It's not efficient, it doesn't work at scale, and it was living on one tiny server.  But I only had about 800 installs.  So I figured that eventually, when I got approved for the Channel Store, I'd pick up a couple hundred more installs, half of those people would try it, things would level off, and I could reassess down the road if needed.  I underestimated the Roku Channel Store.

Once it was available thousands of new users brought The Bat Server to its knees the first day.  My single t2.small EC2 instance was not up to the task, though in retrospect I did have a few things early on that I'm really thankful for now.  

I cached everything.  Data gets saved to Memcache, so an artist, album, or song only gets processed once.  Images get pulled through Amazon CloudFront, so those crazy dynamic images eventually make it to the edge servers and never touch my server again.  And lastly, I put Varnish in front of the Node.js service as a way to control how long a previous request can be served as a cached result before we go back to the internet radio station to determine what's playing.  If all else fails, I can dial up the cache duration.
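The Memcache piece boils down to a get-or-compute pattern.  A minimal sketch, with an in-memory Map standing in for the real Memcache client (names and TTLs are illustrative):

```javascript
// Toy stand-in for a Memcache client: expensive lookups run once per TTL,
// everything else is served from cache.
const cache = new Map();

async function getOrCompute(key, ttlSeconds, expensiveFn) {
  const hit = cache.get(key);
  if (hit && hit.expires > Date.now()) return hit.value; // served from cache

  const value = await expensiveFn(); // e.g. query Last.FM, Discogs, Gracenote
  cache.set(key, { value, expires: Date.now() + ttlSeconds * 1000 });
  return value;
}
```

Usage would look like `getOrCompute('artist:Daft Punk', 86400, () => fetchArtistInfo('Daft Punk'))`, so a given artist's metadata is only assembled once per day no matter how many stations play them.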

I also added centralized logging and analytics, not just from the service but also from the clients themselves.  I did this with an Amplitude library I released and a syslog client I built directly into the channel, both written in BrightScript.  I wanted insight into what was going on out in the field.

But no amount of caching or logging was going to save this.  Luckily I was already using EC2 for my existing instance, but now it was time to put myself through a self-run AWS Bootcamp.

First off, I needed to handle the new load.  So I created a new t2.large and an Elastic Load Balancer.  I pointed DNS for the existing server to the load balancer and put the two servers behind it.  Things became much happier quickly, but I knew this was only a stop-gap measure.  I needed some auto-scaling action.

Adding the t2.large bought me some time to figure out which AWS CloudWatch metrics would work best in my situation for scaling capacity.  At first my alarms were a bit too aggressive, causing servers to spin up and down pretty frequently, but I eventually leveled them out and finally felt confident enough to pull the t2.large out of the load balancer and let the system manage itself.

With this new infrastructure I needed some insight into it.  Between Runscope for API monitoring (I so love this service), New Relic for server monitoring, Papertrail for the above logging, Rollbar for Node.js error reporting, and Datadog, I was able to put it all into one nice little interface.  Add Zapier and Pushover with some webhooks, and I now get custom alerts right on my Apple Watch.  But man, am I paying for a lot of services now.  I hope that once some time passes without incident I can get rid of a few of them.

So now what?  Well obviously things aren't perfect yet.  Each server has its own instance of Varnish, so that's pretty stupid.  But that's a side effect of my reactive-scaling.  So maybe I'll [Varnish] -> ELB -> [Servers], but that would require an additional, full time EC2 instance just for caching, plus I read that putting an ELB behind Varnish isn't terribly awesome.  I never thought I'd miss managing F5 BIG-IPs!

So what did I learn?  The obvious, really.  If you're going to build a single-homed, power hungry service, be prepared to scale horizontally.  And what I did to fix it wasn't novel, it's what these AWS resources were created for.  I also learned that the extra effort I put in up front both for analytics and logging made it easier to know what was going on.  And lastly, utilizing the Right Tools For The Job™, in this case memcache, varnish, and the array of services from AWS, really allowed me to take something that would have only handled a few hundred users initially to something that I feel confident I can scale out to any size.

Enjoy the tunes! 

Developing for Roku

Ask yourself, "Do I know anybody who's developed anything for the Roku media player?"  More than likely the answer is no.  If you ask others you might get some response, but more than likely nobody you know has done it either.

And that's why I wanted to throw up a little piece about building for the Roku.

Chapter 1: I want to listen to Streaming Internet Radio.

Marantz audio receiver


For me it started innocently enough.  I wanted to listen to streaming internet radio.  I had been listening using my Marantz home theater receiver.  This was ok, I guess.  The interface looked like this:

Not awesome.

My next thought was the AppleTV.  I use my AppleTV for everything else.  The interface is pretty good, but not great.

AppleTV


But then I learned of a major deal breaker: You can't add your own stations.

So I looked at other options.  Run an iOS app and AirPlay-mirror it to the AppleTV?  That works, but it's not elegant by any stretch of the imagination.  Build an HTPC?  Get a Raspberry Pi?  All valid options.  But then I remembered that Roku had a ton of available content.  Surely it must have what I was looking for.  Next thing you know, I had a Roku 3 on order.

It turns out I was right.  The Roku had a ton of applications (they call them channels) focused on music and streaming audio.  Here are some of them:

The SHOUTcast Radio application was the only one I could find that let me add my own stations, so I was a step further than I was before.  But, as you can see, a lot of Roku apps have a very templated feel to them.  I later learned why.

I started thinking of other features I'd really like.  Last.FM Scrobbling, Adding to Rdio playlists, updating my room lighting with my Philips Hue system.

Chapter 2: Why Don't I Just Build My Own?

Before I knew it I was browsing around the Roku Developer portal.  Things I learned right away:

  1. They have an open SDK.  Cool!
  2. You write in a language called BrightScript.  WTF is BrightScript!?
  3. Callbacks and async stuff happens through a "message port".
  4. You have to support SD and non-widescreen TVs.
  5. They have an approval process like Apple to get on their channel store.

Sounds fair enough.  Let's go through this.

But here's my disclaimer:  I've developed one Roku channel, and there are people who know much more about it than I do.  I did about as well as anybody who's building something for the first time with a technology would, and I'm only giving my initial thoughts.

BrightScript is fundamentally a bastardized version of BASIC.  It's a language so obscure that a Twitter search for the word will often bring up zero results.  Roku's SDK provides a limited set of native objects like arrays, associative arrays, an HTTP client, bitmaps, etc.  You create one of these objects with the simple syntax of

myArray = CreateObject("roArray")

Everything starts with ro, and it's weird to create an object by its string name.  But whatever.

But those objects are it.  You can't subclass, you can't create your own first class citizens, and you are at the mercy of the SDK.  

So without the ability to define custom objects, it seems any object-oriented approaches are out the window.  Oh no, my friend.

This is where your new best friend, the roAssociativeArray comes into play.  You'll use it for everything.  And you'll use it for this too.

Let's say we wanted to build a simple cache class.  We'd do something like this:
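(The original snippet is lost in this copy; what follows is a reconstruction of the pattern described in the notes below, with illustrative names.)

```brightscript
' A "class" is just an associative array with function references attached.
function CreateCache() as Object
    cache = {}              ' {} is the roAssociativeArray shortcut
    cache.items = {}
    cache.Add = cache_add   ' attach plain functions as members
    cache.Get = cache_get
    return cache
end function

sub cache_add(key as String, value as Dynamic)
    m.items[key] = value    ' m points at the instance when called as cache.Add(...)
end sub

function cache_get(key as String) as Dynamic
    return m.items[key]
end function
```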

So let's go over this.

  • Your "class" is really just an associative array.  ( {} is the AA shortcut, like in other languages).
  • m is a weird magical variable that points to your associative array, as long as the function was fired via the declaration in your "class".  In this case you can't just call "cache_add" directly; you'd have to call YourCacheInstance.Add in order to get the magical m.
  • But cache_add is still completely public and accessible, and totally confusing to people who may be reading your code.  You shouldn't access it directly, but that doesn't mean you can't.

Ok, so what's this port?  It's roMessagePort.  And it's stupid.

Have you ever written a video game using a game engine?  You have the game loop, where every cycle you can do things like move a sprite, check for collisions, make the blue box turn red... that kind of thing.  In modern application development that's abstracted away from you, but this is mostly how Roku applications work: one big event loop.

So if you have a screen you expect user interaction with, first you create a roMessagePort object, assign it to the port of your screen, and then start the loop, waiting for something to happen.

Here's a simple example:
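(The original snippet is lost in this copy; this is a reconstructed sketch of the port-and-wait loop.)

```brightscript
' Create a screen, attach a message port, and loop on events.
screen = CreateObject("roParagraphScreen")
port = CreateObject("roMessagePort")
screen.SetMessagePort(port)
screen.AddParagraph("Hello from the event loop")
screen.Show()

while true
    msg = wait(0, port)                      ' block until an event arrives
    if type(msg) = "roParagraphScreenEvent"  ' note: single = for comparison
        if msg.isScreenClosed() then exit while
    end if
end while
```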

And yes, you don't use double equals for comparison.

“But what if you make multiple roUrlTransfer requests and they all start coming through one port?  How do you know what each is connected to!?” you're currently saying in disgust.  Well, our friends at Roku have come up with a way to work around that obvious oversight.  They added a GetIdentity to the interface so you can compare incoming response data against your original requests to see which one owns it.  Otherwise you'll be saving FancyPhoto.jpg's data to a file called importantdata.json.  So you'll have to keep the original request data around somewhere while you wait for responses to come in.  (If it's an async request you have to keep an active reference somewhere or it'll die in transfer anyway, but that's beside the point.)

So with the basics of how you work with data out of the way let's talk about the UI.

Remember seeing a bunch of screenshots above that all look similar?  Say hello to roSpringboardScreen.  Just another piece of the SDK.  You pass it some data and it lays it out to look like that.  It's easy, it's fast, and it gets the job done.  Roku supplies a ton of screens that do this type of thing.  roParagraphScreen, roPosterScreen, etc.  The obvious downside is almost every Roku application looks the same.

Early on I decided I wanted to do something different, especially for the "Now Playing" screen.  So I jumped at roImageCanvas.  It does what it sounds like: it lets you put images on the screen wherever you want.

roImageCanvas has a lot of niceties.  A built in HTTP client so you can point to URLs instead of having to download things and handle local files yourself and being able to pass a simple array of things to draw are two of them.  For most things this really should be good enough.  But once I implemented my Now Playing screen with it I found I wanted even more control and flexibility.  This is when things got dark.

Working with Roku's roScreen is taking that game loop comparison to the next level because it's literally for building games on the Roku.  But there's no middle ground between roImageCanvas and roScreen.  If you want to do any kind of dynamic visuals on your screen you're giving up doing things Roku's easy way.

So let's talk about how to do some simple things with roScreen.

  1. Want to move a box across the screen?  There's no way to ask an object where its location is, so when you draw something to the screen, keep note of its coordinates.  Next time the loop hits, increment that number.  Then redraw the object at the new coordinates.
  2. Want to fade something in?  Draw it to the screen as completely transparent.  Then much like the previous example, keep incrementing the alpha value at the rate you want until it gets to the alpha you want.

But you'd better watch out.  Things like drawing text and drawing alpha'ed regions are expensive for the Roku to handle.  Luckily, roScreen does double buffering, so you don't see a flicker while it's cranking through all of that heavy work, like making words appear.

For example, you'd want to do this on every loop if you want to do any movement.
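(The original snippet is lost in this copy; a reconstructed sketch of the per-frame redraw, assuming a double-buffered roScreen.)

```brightscript
' Clear, draw, and swap on every pass of the loop.
screen = CreateObject("roScreen", true)  ' true enables double buffering
x = 0
while true
    screen.Clear(&h000000FF)             ' wipe the back buffer
    screen.DrawRect(x, 100, 50, 50, &hFF0000FF)
    screen.SwapBuffers()                 ' present the finished frame
    x = x + 2                            ' nudge the box for the next frame
end while
```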

But if it's static, and you have no reason to keep redrawing the screen, you don't have to.

This is when that requirement of supporting SD and non-widescreen displays comes into play.  You have to make sure your UI within this screen lays out properly.  Where if you were using any of the SDK canned screens like roSpringboardScreen all this would happen magically for you.  Even roImageCanvas has support for this kind of thing.

Chapter 3: Testing Your Channel

And how do you test on old screens?  Well, a Roku 3 doesn't support analog/non-widescreen/SD output, so you'd be like me and purchase a second Roku, a Roku 2.  After discovering there are major performance differences between the Roku 2 and Roku 3, you'll probably also want to purchase a Roku Streaming Stick so you can verify your application on all the hardware.

But after all that hard work you'll want others to try it out.  And much like the Apple App Store, having your work publicly available is a source of personal pride.

Unlike Apple, anybody who builds a Roku channel can make it publicly available.  You upload it to the Roku developer portal and they give you a link.  Via that link anybody can install it on their device.  This is great for getting a small group to test all your hard work.

The packaging process is simple.  You zip it up, you upload it to your personal Roku, you type in a password that was generated previously, and you download it from your Roku to upload to their portal.

Chapter 4: Distribute On The Roku Channel Store

But just having it available isn't enough.  You want it on their Roku Channel Store.  Just like the Apple App Store that's where people discover new applications and where you'll get the users.  But it'll have to get approved first.

With Apple a worst case scenario is about two weeks unless there's some special reason they aren't approving you.  With Roku, let me put it this way: I uploaded my application on September 11th.  That's right, it's been 88 days since I submitted for approval.

I did hear from them once, about a month in, asking if I could upload a new version that listened on a different HTTP port in my code.  I did, and then I never heard from them again.  I email them about once a week asking for an update, but never get a response.  This kind of treatment is enough to make me say "Screw Roku!", and it's why I'd probably never build a second application for their platform.

All this on top of the small things you might forget about when building for an obscure platform.  Any platform you've built for before probably has a drop-in analytics library, so you can get your Google Analytics or whatever going without any effort.  Here, I ended up building a BrightScript analytics package for Segment.IO.  Want other third-party features?  The SDKs those services provide aren't going to work here.  I integrated Rdio, the above Segment analytics, Philips Hue, and Last.FM on top of my own custom APIs.

So in closing, I'm certainly not saying you shouldn't develop for the Roku if you're thinking about it.  On the contrary, it's a really fun challenge to build in an environment that's probably completely different from what you're used to.  And if it's a pure numbers game, there are more Rokus attached to TVs than Nexus Players and Fire TV boxes combined, by an order of magnitude.  It's not even close.  But keep your expectations in check, keep your feature set small, and expect to get creative with the solutions you come up with.

Or just wait until you can write for the AppleTV.

 

Building TastemakerX for iOS

It's still in beta, and I'm still happily iterating over all of the features of the application, but I wanted to share the tools I used, the services I take advantage of, and overall tidbits that might be useful to fellow developers about how I've been building TastemakerX.  Maybe you can in turn share some things with me.

Tools that made my life easier.

Spark Inspector


Both it and Reveal popped up around the same time, but I've been using Spark Inspector as my tool to debug view-related things.  We've all done it: make a view's background bright green or whatever so it'll stand out.  Now I skip that, since Spark Inspector lets you visually troubleshoot, move view frames around, change colors... basic stuff, but very helpful.

We've all had the "That view is supposed to be right there!" moment.  So you jump into 3D mode, see where it actually is, and go from there.

 

CocoaPods

I was pretty hesitant to jump into CocoaPods land.  If you're not yet familiar, it's a dependency manager, somewhat akin to a "package manager" for Objective-C libraries, like gem, npm, or pear in other languages.  I saw it as exchanging Objective-C dependency hell for Ruby dependency hell.  And frankly, I prefer the Objective-C version.  But I experimented with it on some test projects and had no issues, so I decided to use it with TastemakerX.  I'm glad I did.  I do a "pod update" after each release, and I know I'm always up to date.  It also makes experimenting with third-party libraries simple: add one to your Podfile, "pod install", play with it.  If you don't like it, remove it from the Podfile and move on.

Cocoa JSON Editor


If you're building the client but not the API, there's often a stage of experimentation.  This is where my experiments took place.  I was able to quickly make requests against the APIs and figure out exactly what I was trying to get back, not just from our internal API but from third-party services as well.

It also really helps the scenario when you need a mocked up API endpoint.  It allows you to create data models, and the built-in web server will map it to a path.  Point your client to it and off you go.  Within seconds you went from a non-existent API to an endpoint with mocked up data.

Cocoa JSON Editor


Colorbot


I kind of wish this were a built-in developer tool in Xcode.  I'm sure you're like me and have your UIColor+CustomColors category, but it's really nice to be able to visually organize your app's color scheme.  You use the colors everywhere, so they should be available, definable, and exportable.

Your UIColor category works great in your app, but where do you go when you need that color in your image editor?  Or a CSS file?  Colorbot makes exporting a color easy, as an NSColor, UIColor, hex value, and more.

 

Third party tools for working with Core Data

MagicalRecord

Both this and mogenerator were suggested to me by a friend.  While I was hesitant to throw a layer on top of Core Data, I experimented a bit and saw the advantages.  The big win with MagicalRecord is that it simplifies using Core Data across multiple threads.  You really shouldn't be doing Core Data work on the main thread, so anything that makes your life easier here is nice.  It likes using blocks, and so do I, so we became fast friends.  It'll give you the managed object context for the current thread and a block to do your work in.  The downside?  Documentation.  It's kind of all over the place.  Definitely read through the header files.

mogenerator

A nice little utility that takes your Core Data model and creates two class files from it: one with all the stuff a managed object should have, and one subclass where you put all your custom code.  It really does make things more manageable.

 

External Services

Runscope

Run, don't walk, to get yourself an account with Runscope.  At its core, it works as a proxy between you and your API.  Don't let that scare you.  Because it's the middleman, it's able to capture the activity of your application in detail.  Each request is shareable, so if you see something weird in testing, you can simply log into Runscope, grab the URL of that request, and share it with whoever might be interested, even if they're not a Runscope user.

Other really useful features include flagging all requests that resulted in an error or that took too long to execute.  And now, with Runscope Radar, you can turn a request into an API test to make sure that thing that was broken stays fixed.  Oh, and it pretty-prints the JSON of each request.  Could you do all of this with in-house unit tests, a copy of Charles, and your clipboard?  Sure, but not this well.  It just works.  I wouldn't be exaggerating if I said it has saved me days of work.  And the same can probably be said for the teammates I send Runscope URLs to, saying "Could you look at this?"

Runscope


Crashlytics

If you're like me, you've tried a few crash reporting services.  Also, if you're like me, you've been annoyed by all of them in some way or another.  Crashlytics wasn't really on my radar until I went to the Twitter mobile dev event a while back.  They gave a demo of Crashlytics that caught my attention.  It works just like any other crash reporting service, but the difference is it runs a service in the background on your build machine.  This may bother some people, but it's nice that it takes care of grabbing the dSYM and keeping track of archived builds without me taking extra steps.  Because of this, I haven't had to spend a single second manually symbolicating crash logs because Crashlytics couldn't handle them or because I forgot to upload the dSYM.

Another nice benefit: it allows you to set arbitrary keys as your user progresses through the application, so I know things like the last artist ID a user viewed or the chart they were checking out.  Oh, and it's free.

Product Development

When I started at TastemakerX one of the first things we decided was that we'd be making a rather large change from the idea of the v1 product they had already built.  So we spent some time really nailing down what we wanted this product to be.  During this time I still wanted to be writing useful code, so I built some standalone sandbox apps to work out things like networking, image handling, Facebook Open Graph, playlist generation, etc., as reusable classes I could seamlessly move over to a production app once we knew what the app would entail.

I know most projects don't have the luxury of working on the application before they start working on the application, but it gave me a nice head start.  Not to mention it enforces solid design patterns when you're writing code without knowing how you'll use it later.  There's no UI to contaminate your thinking.

 

Some things I've learned

I'd hope to learn something new on every project I work on.  This was no exception.  Maybe some of these will jog something you might want to look at.

  • Storyboards are fine.  Use them, or not.  I don't care.  This was my first non-side-project application built from scratch with storyboards, and I made the mistake early on of trying to use *only* the storyboard.  Know when it doesn't make sense, and go create your nib files.
  • Don't mix up objectWithID, objectRegisteredForID and existingObjectWithID.  Just saying.
  • You're probably not using NSSet/NSOrderedSet as much as you could be.
  • You're also probably not using NSCache as much as you could be.
  • Take advantage of iOS 7's performFetchWithCompletionHandler.  You have a background window of time to do some work.  So instead of just fetching new data, maybe also do some of the post-processing you would normally have to do when the UI is up.  Maybe pre-calculate the height of UITableView cells for this data and cache that for later, or if you downloaded some images that need to be resized take care of some of that.  But there's only so much you can do in this window, so be smart about it.
  • Speaking of UITableView, check out iOS 7's estimatedHeightForRowAtIndexPath if it makes sense for you.  It'll postpone the height calculation for a row until it's rendered instead of doing it all upfront.  This matters most on older devices, but on the flip side you might see some frames drop the first time heightForRowAtIndexPath is called to calculate the real height as the row is built.
  • Aside from the audio playback manager, almost every instance of KVO I originally implemented eventually got ripped out.  What seems like an elegant solution using KVO is probably just making things overly complex.  Think about whether it's really the solution to the problem.
  • I have all network calls go through a singleton and fire off completion blocks.  That's worked really well, and in retrospect I wish I'd done the same thing with Core Data.  On the same topic, if I were to go back (and I might!) I would never pass back managed objects.  Use Core Data for storage, and build standalone, separate objects out of them.  Each object can have a reference to its Core Data counterpart for CRUD operations.
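That last value-object pattern can be sketched like this (the Artist class and its keys are hypothetical, not actual TastemakerX code), and it also shows why the distinction between existingObjectWithID: and objectWithID: matters:

```objc
#import <CoreData/CoreData.h>

// A plain value object built from a managed object.  It keeps only
// the objectID as its link back to Core Data for CRUD operations.
@interface Artist : NSObject
@property (nonatomic, copy) NSString *name;
@property (nonatomic, strong) NSManagedObjectID *objectID;
@end

@implementation Artist

+ (instancetype)artistWithManagedObject:(NSManagedObject *)mo {
    Artist *artist = [Artist new];
    artist.name = [mo valueForKey:@"name"];
    // objectIDs, unlike managed objects, are safe to pass between threads.
    artist.objectID = mo.objectID;
    return artist;
}

- (void)saveName:(NSString *)name inContext:(NSManagedObjectContext *)context {
    NSError *error = nil;
    // existingObjectWithID: hits the store if needed and returns nil on
    // failure, unlike objectWithID:, which happily hands back a fault
    // even for an object that's been deleted.
    NSManagedObject *mo = [context existingObjectWithID:self.objectID
                                                  error:&error];
    if (!mo) return;
    [mo setValue:name forKey:@"name"];
    [context save:&error];
}

@end
```

The UI only ever touches Artist instances; Core Data stays an implementation detail behind the CRUD methods.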

So that's my little rundown of things I've found useful or interesting so far at TastemakerX.  I'd love to hear about things you're using to make your life easier, or things you wish you'd known when you started.  Have fun, mobile friends!

Iterate!

My thoughts and experiences on the Pebble Smart Watch

Like with a lot of things in life, I told myself I didn't want one.  And then when it came down to it, I realized I was lying to myself and I actually did.

This was the story behind the Pebble Smart Watch.  Once people started getting them delivered and I read this review, I knew I had to have it.  Luckily Craigslist exists and people were selling theirs.  Watch acquired.

@franktronic asked if I could write a mini-review of my experience thus far with the little gizmo.  I figured why not.  But you should also read the awesome Five days with Pebble by Danilo Campos as he's much more eloquent than I.

I'll be coming at this from the perspective of an iPhone user on iOS 6, since that's my primary device.

Out of the box I quickly became frustrated as I learned of the things it couldn't do.  "I can get my Twee...oh.  I can get my Faceboo...oh.  Well, at least I can get my SMS."  Not being able to get my Tweets on my watch wasn't an answer I could accept, so I went ahead and jailbroke my phone ASAP to install BTNotificationEnabler.  Success.  Now I can get the notifications I want on my Pebble.  A huge difference.  Without this hack the Pebble would sadly be useless to me.  It's just not enough to dismiss phone calls and read text messages.

The next thing I experienced was a period of learning to trust the watch.  I'm used to my phone vibrating.  Now my phone and my watch would vibrate at the same time.  I'd glance at my watch, dismiss the notification, and move on.  But a minute later my phone would vibrate again... but not the watch.  Why?  I had the setting enabled to remind me about missed notifications.  The iPhone had no idea I'd seen it on my watch and was therefore going to remind me until I looked at it on my phone.  I dealt with this for a couple of days until I decided the Pebble was here to stay, so I turned off not only repeating reminders but also vibrate-on-silent.  My watch now tells me what's going on.  No need for my phone to duplicate that.

For controlling audio apps on iOS, it works great.  It feels a bit odd that you have to launch the "music app" on the watch.  It would be nice for it to detect that audio is playing on the device and switch to that for you, and when the audio ends, to switch back to the watch face.  But that's a minor gripe.  I've been using it to control both Downcast for podcast listening and Spotify.

Not that I care much about its time-telling functionality, but it's fine.  You can download different watch faces.  My guess is that as time goes by some really cool ones will become available.  Right now I'm using the "Text Watch" face that everyone else who owns a Pebble seems to be using, which tells the time in a "Five Twenty Two" format.

I wish it would queue up notifications.  Right now if another notification comes in before you've seen the previous one, you won't see the first one.  That doesn't seem like a big deal, but if you're at dinner with somebody and trying to be polite, you may not want to check your watch every time it buzzes.  In fact, a motion that looks like checking the time every couple of minutes may actually be more insulting than pulling out your phone.  Having your phone out all the time has sadly become accepted.

As far as battery life, it's lasting about as long as I'd expect: around five days for me.  They say it should be seven, but close enough.  However, I'm experiencing an issue where the Low Battery notification comes far too late.  It's supposed to arrive when I have about 24 hours left, but it usually hits when I'm down to about an hour.  Without a charger at work (you can't get an extra cable at the moment), my watch will die before I get home.  I'd prefer to get that notification in time to charge it the night before it plans on dying on me.  This seems to be a bug, and I've sent them logs.

Aside from those couple of small items, it's been smooth sailing.  Someone looking for me on IRC?  My watch lets me know.  Someone mentions me on Twitter?  Even if I'm halfway across the office from my phone, the Pebble lets me know.  Delivery notification for a package sent via push notification?  Now it's on my wrist.  Not to mention the regular stuff like dismissing phone calls, reading emails, and seeing text messages come in.

And it has a backlight.