One Mac or Two?

Apple's new Retina iMac and my own aging work hardware have me thinking a lot about multi-computer setups lately. Right now I have two machines:

  • A mid-2011 Mac Mini that I use mostly for work. 8GB of RAM and an aftermarket Fusion drive courtesy of iFixit's handy dual drive kit. It's hooked up to a 23" monitor on my desk.
  • A late-2012 13" Retina MacBook Pro, also with 8GB of RAM, which is mostly for personal stuff. I'll occasionally use it for work when traveling.

The Mac Mini in particular is really starting to show its age, and the Retina iMac is VERY tempting, thanks both to the amazing screen and the much-improved performance. But the thing that keeps nagging at me is whether I want to keep maintaining two machines at all. To try to sort it out, I've put together a few pros and cons of a two-Mac setup:

PRO: Keep work and personal stuff separate

As an independent developer, it's important for me to maintain some separation between work and non-work. That means closing down work projects at the end of the day so that I can focus on the rest of my life without getting sucked back into work. With two computers, that's pretty easy: Put the work Mac to sleep and walk away. With just one machine, that's harder. (I think the best way to deal with this is probably to use two user accounts on the same Mac, one for work and one for everything else.)

CON: Maintaining separate development environments

Two machines means twice as many things to keep up to date. Software updates, Xcode betas, and especially provisioning profiles are a pain to deal with across two Macs. Unfortunately, neither Dropbox nor iCloud Drive helps much with this kind of thing, and we don't have the kind of tools for replicating dev environments that are so useful when doing web development.

PRO: No docking/undocking

With two Macs, there's none of the docking/undocking setup involved with hooking a laptop up to an external monitor. I don't want to just leave my laptop on my desk all the time - that defeats the purpose. That means every time I want to move it, I have to unplug a few cables before I can take the laptop with me. When I come back, I have to reconnect everything. Even with something like a HengeDock (which I've used before and liked a lot), there's the inevitable window rearranging and other snafus that come along with changing screen resolutions.

CON: Cost

Having two Macs means keeping the hardware for both up to date. I could probably let my personal laptop slip a bit, since it usually doesn't have to do anything too intensive. Combine that with the old adage that you should buy the best Mac you can, and it's possible to stretch the lifetimes out a bit. Nevertheless, maintaining two computers is always going to be more expensive than maintaining just one. (Then again, that iMac is pretty pricey...)

PRO: Desktop Retina

Right now, the only real way to get a Retina display on the desktop is with the Retina iMac. That doesn't seem likely to change in the near future. Maybe Apple will release a standalone Retina display and update the Mac Pro to support it, but I think it'll be a while before a laptop can drive that many pixels. Marco Arment's guess of 2016 for a standalone Retina display sounds about right to me. Granted, Retina isn't a must-have feature, but it sure would be nice, especially when developing for iOS devices that all have Retina screens. Those simulators take up a lot of screen real estate!

WILD CARD: Desktop + iPad?

I've always had a Mac for personal use. But I wonder: Do I really need one? Could I replace most of my personal laptop usage with the iPad? (Somewhere in Italy, Federico Viticci is cheering.) If so, that mostly negates a lot of the con arguments. Maybe I'll give that a try for a while and see how it works out.

Have thoughts or experience about resolving this dilemma? Let me know on Twitter!

Quibbling with iOS 8's Location Permissions

Apple changed the way apps can ask for permission to use your location in iOS 8. Previously, apps simply asked for permission to use your location, which you could allow or deny. If you allowed access, the app could use your location anytime, even while it was in the background. Once set, the permissions could be changed in the Settings app.

In iOS 8, Apple added some additional granularity to location permissions. Instead of a one-size-fits-all permission, apps can now choose whether to request access to your location only when the app is in use, or all the time (even in the background). It's a great distinction to make from a user's standpoint, because it means you have more control over your privacy.

However, there are a couple of catches that make for a bad user experience. Apps can only ask for one level of access, and can only ask once. Developers have to choose how much access to request. Once you've asked for "when in use" authorization, for example, you can't ask again for "always" permissions. You also can't display a dialog asking the user to choose between "Always," "When in use," or "Never." (You could work around this with some clever use of dialogs, but it feels a little hacky.)

The best practice Apple is promoting to developers is to ask for the least permissions you need at first, then move up in response to user action. In the WWDC talk about Core Location this year (session 706, "What's New in Core Location"), Apple used the example of an amusement park app. Most of the time, the app only needs "when in use" authorization to show your location on a map, so it asks for that level of access at launch. But there's also an extra feature that uses region monitoring to let you know when you get near specific attractions, even when the app is in the background. For that feature, the app needs "always" authorization. The problem is, once you've granted "when in use" authorization, the app can't prompt you for more access.
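In code, the two request paths in the amusement-park example might look something like the sketch below. This is my own illustration, not Apple's sample code; the class and method names other than the CLLocationManager API are hypothetical, and the matching NSLocationWhenInUseUsageDescription or NSLocationAlwaysUsageDescription string would also need to be present in the app's Info.plist for the prompts to appear at all.

```swift
import CoreLocation

class ParkLocationController: NSObject, CLLocationManagerDelegate {
    let manager = CLLocationManager()

    override init() {
        super.init()
        manager.delegate = self
    }

    func showVisitorOnMap() {
        // The park map only needs foreground access, so ask for the
        // lower level first. This prompts only while the status is
        // NotDetermined; once the user has answered, calling it again
        // does nothing.
        manager.requestWhenInUseAuthorization()
    }

    func enableAttractionAlerts() {
        // Background region monitoring requires "always" access. But if
        // the user already granted "when in use" above, this call will
        // NOT re-prompt -- which is exactly the catch described here.
        manager.requestAlwaysAuthorization()
    }

    func locationManager(manager: CLLocationManager!,
        didChangeAuthorizationStatus status: CLAuthorizationStatus) {
        // React to whatever the user decided.
    }
}
```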

Apple's solution to this problem is to let developers send users to the Settings app, where they can change the app's location permissions. This feels like a classic "sweet solution." It's not a good experience to boot users out into the Settings app, even if it lands them directly on your app's location settings. It breaks them out of your UI, and there's no obvious way back to your app after they've changed the setting.
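Concretely, that hand-off to Settings is about two lines of code; iOS 8 added a URL constant that opens your app's own page in Settings. A minimal sketch:

```swift
import UIKit

// UIApplicationOpenSettingsURLString is new in iOS 8 and deep-links
// to the Settings page for the current app.
if let settingsURL = NSURL(string: UIApplicationOpenSettingsURLString) {
    UIApplication.sharedApplication().openURL(settingsURL)
}
```

The ease of writing it doesn't make it feel any less like an ejection seat from the user's point of view.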

I'm sure the goal here is to prevent apps from constantly badgering you for access to your location. It's a good goal, and one Apple should stick with. The problem is, in the pursuit of that goal they've created a disincentive to follow the best practice. Because you can only ask for location access once, developers will feel that they need to ask for the maximum permissions they think they'll ever need, even if only a minority of users will ever benefit. Moreover, a dialog offering to send you over to the Settings app to change a preference doesn't feel like much of an improvement over a dialog asking for greater location permissions.

A relatively minor change could resolve a lot of these problems: Allow apps to ask for permission once per type of authorization. If a user allows "when in use" authorization, then allow the app to ask for "always" authorization later. (But only once. If the user says no, that's it.) If the user denies "when in use" authorization, the app can't prompt again for any level of access. If the user denies "always" authorization, allow the app to ask for "when in use" authorization instead, or give them both options at the outset.

Apple was smart to give users more control over their location data in iOS 8. It would be even smarter if they tweaked the implementation so that choosing those permissions is a better experience.

Self-Inflicted Wounds

Russell Ivanovic, hitting the nail on the head:

Tim Cook keeps telling us that ‘Only Apple’ could do the amazing things it does. I just wish that Apple would slow down their breakneck pace and spend the time required to build stable software that their hardware so desperately needs. The yearly release cycles of OS X, iOS, iPhone & iPad are resulting in too many things seeing the light of day that aren’t finished yet.

One thing that's striking is how many of Apple's troubles are self-inflicted. Gone are the days when Apple planned product announcements around conferences like Macworld Expo. That the company controls its whole ecosystem, from hardware to software to services, is supposed to be a strength. Controlling everything should mean that you can get all your ducks in a row before pulling back the curtain. The only thing that Apple is truly constrained by are its own self-imposed deadlines. The problem is, Apple keeps shooting itself in the foot. Rather than waiting until a new version of iOS is fully finished, for example, they rush an update out the door to coincide with the release of new iPhones.

Of course, new hardware usually requires some updates to support it. To deal with this, Apple could decouple major iOS releases from hardware releases. For example, release an iOS 7.2 update to handle the larger screens of the iPhone 6 and 6 Plus, without all the other stuff in iOS 8 like extensions, HealthKit, etc. A more fully-baked iOS 8 could be released later. The change would undoubtedly be difficult for developers, but Apple usually (rightly) chooses what's best for users first and what's best for developers second. (This might even have a side benefit for developers, by potentially uncoupling Xcode releases from SDK versions as well.) It's also worth remembering that Apple has done this before, when the iPad was released in 2010. It wasn't ideal from a developer perspective, but it was workable.

Another option is to slow down the OS release cycle. It's not hard to imagine Apple setting up a rotation where iOS and the Mac OS get a major update every other year. Those cycles could be offset by a year: iOS 9 in 2015, Mac OS X 10.11 in 2016, iOS 10 in 2017, etc. On "off years" between major updates, the company could do point releases to introduce minor features and support new hardware, especially on iOS. Both operating systems are sufficiently mature at this point that they don't need yearly updates. Sure, it might be nice, but not at the expense of overall quality. I'd rather have a polished, stable product that I can rely on than a buggy bunch of features that I can't.

It's time for Apple to stop setting itself up for failure. In doing so, it can do right by users and make sure that people still get the "it just works" experience they deserve.

Thinking Long-Term About Apple Watch Apps

I haven't had much to say about the Apple Watch because there are so many things we don't know about it yet. But there are some interesting things to speculate about. One idea that's been floating around the Apple developer community is that apps for the Apple Watch will be extensions of iPhone apps. They'd run on the Watch, but be downloaded and installed as extensions of an iPhone app, and get to take advantage of the shared data container used by iOS app extensions.

At first blush the idea makes sense, but the more I think about it, the more I don't think it's the way Watch apps will work. Ben Thompson pointed out that Apple is thinking long-term about the Watch.

This approach – the one that Apple chose – allows the hard work of UI iteration and app ecosystem development to begin in 2015. Moreover, that iteration and development will happen with the clear assumption that the Watch is a standalone device, not an accessory. Then, whenever the Watch truly is standalone, it will be a complete package: cellular connectivity, polished UI, and developed app ecosystem. It will be two years closer to Digital Hub 3.0 than Alternative #1 or #2.

The tradeoff is significant confusion in the short-term: the Watch that will be released next year is not a standalone device. It needs the iPhone for connectivity. To be clear, this is no small matter: the disconnect certainly tripped me up for a week, and if the feedback I’ve gotten is any indication, it continues to befuddle a lot of very smart people.

Although today the Watch requires an iPhone for connectivity and other assistance, Apple is clearly looking forward to a day when it does not. With that in mind, it would be silly to constrain apps to mere iOS extensions. Why design an app ecosystem around the presence of an iPhone if your long-term goal is to make the watch a standalone device?

I suspect that Watch apps will be installed and managed via a connected iOS device, but will run more or less autonomously. Something like Handoff will be used to pass data back and forth between the Watch and the iPhone. In the short run, Apple may also offload computational tasks to the iPhone CPU to save battery, but that will happen behind the scenes in such a way that developers don't have to give it much thought. Over time, as the Watch becomes more powerful, it will gradually hand off fewer tasks to the iPhone. By making Watch apps independent of an iOS app container from the outset, Apple can make this transition as seamless as possible.

Of course, we'll know a lot more once we get a look at the SDK for Watch apps, hopefully sometime this fall or winter.

Method Naming in Swift

I've been struggling to come up with a coherent response ever since I read Radek's article on naming methods in Swift. While he makes a number of good points, I keep coming back to a deep-seated anxiety about moving toward shorter, less descriptive method names in Swift. The crux of my discomfort comes from this section of his post (emphasis mine):

Code is not made equal. It would be a mistake to apply the same naming convention to everything. Some methods are generic and broadly used. Some are rare or domain-specific.

Consider these functions and methods: map, reduce, stride, splice. These are very short and easy to remember. They’re not exactly self-explanatory, but that’s okay, because they are standard library methods that you’re going to use many, many times so the benefit of a short name is greater than the price of having to learn it first.

On the other side of the spectrum, there’s your application code. You probably don’t want to give methods in your view controllers obscure single-word names. After all, they’re probably going to get called once or twice in your entire codebase, they’re very specific for the job, and once you start working on another class, you’ll totally forget what the thing was.

I hope you see where I’m going with this. If you’re writing library/framework-like code, or some base class that’s going to be used throughout the app, it’s okay to use short and sweet names. But the more application-specific it is, the more you want clearer, more descriptive, intention-revealing methods.

I don't think this logic pays enough attention to the fact that someone else will very likely be working with your code after you write it. Even if you're a single developer, you might eventually pass that code off to someone else – maybe because you got a new job, or added a partner, or sold your app to another developer. Best practices are important because they help us make our code accessible to the next person who works on it.

I also think developers should take care to make a language accessible to people who are new to it. That's especially true right now with Swift - we're all new to it! Keeping the language accessible might mean trading off some conciseness in favor of descriptiveness. Consider the examples above of concise standard library methods: map, reduce, stride, etc. You have to learn what they do before you use them. That's a barrier that you have to overcome before you can start working in the language. The more of those there are, the higher the barrier becomes. As a developer community, we're best off keeping those barriers as low as possible.
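To make the tradeoff concrete, here's a small (runnable) Swift sketch of my own. The first line reads fine once you know map; the second is opaque until you've learned what reduce does and what its arguments mean. Neither name teaches you anything on first encounter:

```swift
let prices = [1.99, 2.99, 4.99]

// "map" is terse and fine -- once you've learned it.
let withTax = prices.map { $0 * 1.08 }

// "reduce" tells a newcomer nothing about folding a sequence
// down to a single value; the name has to be learned first.
let total = prices.reduce(0.0) { $0 + $1 }
```

Every name like this is a small, fixed learning cost; the question is how many of them a codebase should impose.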

Radek rightly points out that there's a spectrum, and the more narrowly tailored your use case is, the more appropriate a concise method name might be. But I disagree with the idea that short method names are appropriate in a base class used across your app, or even in framework code. Even methods that get used all the time should be readable. Doing so reduces the amount of time that a new developer has to spend digging through your source code in order to figure out what's going on. (I shudder to imagine all the command-clicking to figure out what things like funnel or fetch or grab might do.)
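For illustration, here's the kind of contrast I have in mind, using hypothetical names from a podcast app of my own invention. Both compile and do the same thing; only one tells the next developer what's going on at the call site:

```swift
struct Podcast {
    let title: String
    let unplayedEpisodes: [String]
}

// Terse, "library-style" name in application code: fetch what, from where?
func fetch(podcast: Podcast) -> [String] {
    return podcast.unplayedEpisodes
}

// Descriptive, intention-revealing name: the call site reads like a sentence.
func unplayedEpisodesForPodcast(podcast: Podcast) -> [String] {
    return podcast.unplayedEpisodes
}
```

The second version costs a few extra characters to type (most of which autocomplete pays for) and saves a command-click every time someone reads the code.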

I propose that the only real place where very short, concise names make sense is in the language itself. Essentially, this refers to things like map that are already built into Swift. We've got autocomplete to help us with the rest, and Swift already reduces some of the extra typing from Objective-C.

Why does this all matter, and why have I been ruminating on it for the past couple of weeks? Because Swift is new, we have a unique opportunity to shape its conventions. I've done some Ruby programming in the past, and although it's a nice language, I don't want to develop iOS apps in it. I think the iOS and Mac developer communities will do themselves a favor by sticking to longer, more descriptive method names in all but the rarest of circumstances.