The 2016 MacBook Pro announcement is causing a significant uproar in the tech community. Michael Tsai sums it up pretty comprehensively here, so I won’t go into detail, but I did want to put out one use case that I couldn’t sum up in a single tweet.
My wife was just hired as an assistant professor at an “R1” university, which requires research as well as teaching two courses a semester. Most of her work, including grading, reading, and writing papers, can be done on an iPad, but when she’s in research mode she needs SPSS. Her data sets aren’t that large, so she doesn’t need a ton of memory or an incredibly fast CPU, but the mere fact that she needs to run SPSS means she needs a Mac. The university gives her a startup fund that covers her computer purchase but is also used for research expenses and conferences, so the more she can save for those the better. However, she also wants a machine she can use for a long time. She needs a laptop because, contrary to what most people think, professors actually work incredibly long hours outside the classroom/office, way more than 40 hours a week. Basically, she is a casual pro: someone who needs a computer professionally but for non-demanding use.
Because I knew the new MacBook Pro refreshes were coming, I told her to hold off on buying her computer. During the keynote, she asked what the price would be, and I quite confidently told her it would be the same as before and we’d just get the base 15” model. They had announced that it was smaller, thinner, and lighter, so it shouldn’t be too much for her to carry along with her books to school.
MacBook Pro (15-inch, Late 2016) $2,399.00
What the hell? Let’s just go with the cheapest one.
MacBook Pro (13-inch, Early 2015) $1,299.00
So basically they are selling last year’s model, a tiny spec bump over the 2014 Haswell one (2.6GHz to 2.7GHz, Iris 5100 to Iris 6100), at the same price?
Wait, this is basically a MacBook Air with no MagSafe, no Thunderbolt 2, only one free Thunderbolt 3 port once it’s plugged into power, and a $500 premium for Retina? If we’re spending that much money, fine, we’ll get the stupid Touch Bar. Hopefully it will be more future-proof.
Her office is in an older building with cinder-block walls, which creates complete dead zones for Wi-Fi. The IT department provides a wired connection to each office.
1 x USB-C to Gigabit Ethernet Dongle $34.95
She presents slides in class, connecting to classroom projectors that require a VGA cable. She also uses a Logitech wireless presenter, which comes with its own USB-A wireless receiver.
1 x USB-C VGA Multiport Adapter $69.00
The IT department specially provided her with a Dell monitor (U2417H) that has DisplayPort, Mini DisplayPort, and HDMI inputs. Perfect for Macs, right?
1 x USB-C Digital AV Multiport Adapter $69.00
The Multiport adapter’s single USB-A connector will usually be enough for her Time Machine drive and her external drive, but most likely she’ll need a USB hub, because we all know Thunderbolt hubs and accessories never come cheap. Thankfully she already has Apple’s wireless keyboard and mouse.
Chuq Von Rospach (@chuq, chuqui.com) wrote about Apple’s controversy but rationalized Apple’s decision to remove ports with the following passage:
My laptop has a power port, an SD card port, 3 Thunderbolt ports and two USB ports. I know that in the four years I’ve owned it, I’ve never used the SD card, I use the Power port, one Thunderbolt port, and occasionally plug a USB cable in. So half the ports in this thing are never used — and yet I paid for them because they were built into the computer.
That’s the issue that defines dongles: Should 100% of buyers pay for a feature when only 5% of the owners will use it?
I wonder where all that port-saving money went. Certainly not in our pockets.
Even as a double-income family, that amount of money is definitely not something we can easily come up with. The only reason we were able to buy it was because it was for work and paid for by work. Our personal computers? Forget about it. We’ll make do with our iPhones, an iPad, and a cheap upgradable Dell for gaming and Plex. And I think that’s what Apple is betting on. Tim Cook was quoted as saying the following:
I think if you’re looking at a PC, why would you buy a PC anymore? No really, why would you buy one?
John Gruber wrote that many were confusing “PC” with any personal computer and pointed out that Tim Cook was probably talking about non-Mac computers. In my view, the “personal” in PC hits much harder. I am unwilling to pay $2,000 for a non-work “personal” computer, and a lot of the things I do, I can increasingly do on my iPhone and iPad. Apple constantly pushes its iOS devices as tools for creation, not just consumption, and if I were actually into any of that stuff as a hobby, I’m sure I would make do with what I have as well.
From that point of view, I think the Mac Mini is done. It is very niche for what Apple has made it to be, and squeezed even further by cloud services and iOS devices eating at the Mac ASP; there really is no reason for it to exist. I guess they might as well continue the non-support support: updates every few years, but not much else.
For all the noise about Apple not caring about the Mac Pro, I still think they are dedicated to it. The trash can Mac Pro was obviously a mistake, and instead of leading people on, they promptly and unapologetically abandoned it. I’m expecting the new Mac Pro to be completely different, better poised for the future of VR and ever-accelerating graphics processing needs.
Microsoft might have made a big splash with the Surface Studio, but I think the Touch Bar confirms that Apple wants the iPad Pro to serve that market. Unless Apple completely revamps the macOS UI, I think the iMac is exactly where Apple needs it to be.
I bet against Apple’s desire to own the entire stack, and I was wrong. While there are many things to say about the Watch, most of them have already been said. Here are my thoughts, which started as a comment on a blog, like most of my other blog posts.
The wrist is the perfect place for glanceable information; the watch industry has known this for more than a century. The Apple Watch wants a piece of that action: not the watch industry and its customers specifically, but a place on your wrist, which is finite. This is similar to how YouTube and Facebook compete with TV, not directly in the same market, but over people’s time, which is finite and mostly zero-sum. The Apple Watch may not be the perfect timepiece, but it augments timekeeping with innumerable features provided by software. Like the iPhone, which masked itself as a phone but turned out to be a pocketable personal computer with a phone feature, the Apple Watch has masked itself as a watch but will turn out to be a wearable personal computer with a watch feature. The benefits of having such a personal computing device will gradually earn more and more wrists, and more time on those wrists. Traditional watches can be swapped on a whim, but the major benefits of a smartwatch can only be gained through continuous use. Certain people may be OK with two watches, one on each wrist, but it seems a majority of the target market will eventually use only one. Which watch will they use? THAT is what watchmakers should be worried about.
TL;DR The Apple Watch will not compete directly with the watch industry and its customers; it will compete for time on your wrist.
Based on all the chatter about smartwatches, the new release from Google, and the insight of people smarter than I am, I’ve come to think that a single “Smart Watch” product on its own simply doesn’t make sense. Watches are first and foremost a fashion accessory, more intimate than a phone. There is no single design that could possibly work universally.
What if the Smart Watch were not a product, but a modular hardware specification and communication protocol? It would be tied to your smartphone, which acts as the main communications hub. Depending on which hardware modules are included, functionality could range from simple vibration for notifications to voice commands and touch screens. One function that makes sense in every watch would be a tier of identification. And since the architecture is completely modular, it could easily be implemented in any kind of wearable, including glasses, shoes, etc. It would be Smart Wearables.
Of course, a reference model could be made to demonstrate the full potential of the spec, but it would remain more of a niche product. The benefit would lie not in the profit from the product itself, but in how it would strengthen the tie to the smartphone.
A social network for midsize to large organizations to keep tabs on what’s going on. Email is unstructured, exclusive, and demands action. Intranets require administration, taking away valuable staff time. There has to be a better way.
Chatter. A new way to share and keep tabs on what’s going on. Write as much as you want: the first ‘X’ characters or the first newline becomes the title; anything after that requires a click to see. Add a hashtag (#public is the default) and off you go.
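A minimal sketch of that split rule in Python. The `parse_post` helper, the 80-character default for ‘X’, and the `#\w+` hashtag pattern are my own assumptions for illustration; the idea above only specifies “first ‘X’ characters or newline” and a `#public` default.

```python
import re

def parse_post(text, title_limit=80):
    """Split a Chatter post into (title, body, hashtags).

    The title is everything up to the first newline, capped at
    title_limit characters; the remainder is folded behind a click.
    Posts with no hashtag fall back to #public.
    """
    first_line, _, rest = text.partition("\n")
    title = first_line[:title_limit]
    # Anything past the cap on the first line, plus all later lines,
    # becomes the click-to-expand body.
    body = first_line[title_limit:] + ("\n" + rest if rest else "")
    tags = re.findall(r"#\w+", text) or ["#public"]
    return title, body.strip(), tags
```

For example, `parse_post("Server down in lab 3 #itops\nRebooting now.")` would title the post with the first line, hide the second line behind a click, and file it under `#itops`.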
Hashtags with recent activity will appear in an ‘Active’ list. Tap to quickly view related posts and either check to keep tabs or swipe away to ignore.
Organize your tabs using cards/screens and swipe between them. Tap a hashtag to see all posts or tap a post to see more details.
DMs, user tagging, and group-only posts need more thought. It might not be in Chatter’s best interest to allow private conversations, as they would diminish public chatter. Inclusive, not exclusive, but within the bounds of a trusted organization. Maybe a suggest function to point a user at a hashtag they might have ignored? A Psst? (too hard to pronounce)
Edit: This post was random notes I jotted down at a PM retreat. Very soon after, I learned about a fairly new product on the market called Slack.
“But Mostly Sunny”, the weather app with attitude.
Today’s forecast for Holland, MI:
Mostly cloudy and cold and miserable.
Sunrise: “If a tree falls in a forest…”
The announcement of the Samsung Galaxy Gear, Samsung’s attempt to one-up Apple in innovation IMHO, brought up some interesting questions about the design of a smartwatch. Most notably: what is a smartwatch hired to do? What are its limitations, and how can its differences from a smartphone best be put to use?
The Galaxy Gear seems to me like a typical feature-rich device. The 1.3MP camera is what stood out the most. Is there really a need for an even crappier camera than the one on your phone? It faces outward from your wrist, meaning it’s not even for video calls à la Inspector Gadget.
I could go on about my thoughts on the Galaxy Gear, what Samsung should do, what Apple should do, but those are my opinions and not worth much. So here’s what I personally want my watch to do. The list is pretty intense, so get ready for it.
- Tell me when.
This could be many things. Tell me the time and date. Tell me when I get a call, a message, an @ mention. Tell me when it’s going to rain. Tell me what I’m listening to. Tell me when I should start moseying over to my next meeting. Notifications, short and sweet. Keep individual apps to a minimum. I don’t need an entire RSS reader on my watch; I’ll do that on my phone/tablet/computer. Keep out the clutter, and only show me what matters most, when I need to know about it.
- Act on them.
Let me act on notifications quickly. Input methods would be simple taps and gestures, or voice control. An acknowledge or a remind me later. A quick reply. The Mailbox app’s gestures are really handy for simple action items like archive, delete, or remind me later. That would be perfect for a small touchscreen. Skip to the next song or podcast.
That’s it. The rest are bells and whistles.
* Update: I just noticed that this post came exactly a year after my previous post about critics urging Samsung to be a first-mover. How fitting.
Original Article: Korean critics call for Samsung to ‘reinvent itself as a first-mover’ in wake of US verdict @ The Verge
I own both as well… there’s innovation and then there’s editing. Android does a lot of editing (bigger screens, bigger chips, more fine tuning options) but they all don’t always seem to mesh well in the end. I don’t think Samsung has it in them to innovate in the smartphone division. Apple’s iPhone was a big leap innovation in comparison to the mass market user interface experience on smartphones prior to it, Android stole their idea & edited on top of it to give a more open (yet clunky) alternative. They’ve refined a great deal, as have Apple.. with a bunch of edits. The only people I see truly innovating at the moment is Microsoft & I’m certainly not a Microsoft fan. I really don’t ever see Samsung taking that kind of initiative to truly bring something different to the mass market. So copy and edit, they shall continue to do. (http://www.theverge.com/2012/9/3/3289795/korean-critics-samsung-first-mover-us-trial#113520149)
There’s a big difference between editing and innovation. I like all of the edits Android and iOS have brought but there is a very clear and genuine difference between the UI experience on smartphone’s before and after the iPhone. And the edits Android made existed well before the iPhone in mass market smartphones. Widgets were in smartphones. Multitouch technology did not exist in a mass marketed smartphone devices & it clearly changed the atmosphere and moved tides. I like a larger screen but it’s not innovation it’s a larger screen. High resolution display was also another great addition, innovation? Maybe, maybe not. The only reason I see Microsoft’s universal changes as innovation is because it is a drastic departure from the way things were & may be a sign of how the way things will be. Just look at July and other recently updated sites. They’re beginning a trend. As did Apple with the iPhone. (http://www.theverge.com/2012/9/3/3289795/korean-critics-samsung-first-mover-us-trial#113517787)
A product is not the sum of its parts, but how those parts are put together in a cohesive and intelligent way to solve a problem. Just because a smartphone uses existing technology for its screen, CPU, memory, storage, and battery doesn’t mean the end product is not innovative.
The definition of innovation, provided by Merriam-Webster, is “a new idea, method, or device”. That definition is very broad and can mean pretty much anything. The way I see it, most people are subscribing to this very broad definition of innovation, but only to the device that they prefer. To the other device(s), they use a very narrow definition and very selectively at that.
From what I read, Tuan X used a very narrow definition of innovation, but applied it to both camps. That is why he used the term “edits” to describe something “new” but not “ground-breaking”. He mentions larger displays as an example. When applying the broader definition of “innovation”, yes, 3.5 inches is different from 3.0 and 3.2 and is “new”. But when applying the narrower definition, no, a 0.3-inch increase in display size is hardly “ground-breaking”. High-DPI may not seem like much feature-wise, but it requires a lot of technology (screen, GPU, RAM, battery, and OS modifications, just to name a few) to back it up. I agree with Tuan X that it is debatable whether it is truly “ground-breaking”, but Apple certainly thinks it is.
The way I see it, Apple’s iPhone was innovative in that it combined the functions of an iPod, a mobile phone, and an internet communication device; used a capacitive touchscreen as its main input method; minimized the device’s front face to emphasize the screen; made a colorful grid of icons with a fixed bottom row its main UI; integrated swiping and pinch-to-zoom as the main navigation methods in the OS; separated each function into its own full-screen app so that the device essentially becomes the app; and produced it all in a single mass-market device. Previous products may have addressed bits and pieces of the above, but not a single one addressed all of them in one cohesive device.
Microsoft’s Windows Phone 7 was innovative, not in its hardware specs, but in how its UI was designed. It was still a capacitive-touchscreen device, but the software UI consisted of tiles of dynamic information, with an emphasis on content text over app icons and graphics, and a layout that intentionally extends past the screen to suggest more information. Again, many products before it might have included some of these features, but none combined all of them in a single cohesive UI for a mass-market mobile device. I don’t believe Microsoft introduced a new UI concept just because they “can” (although there are strong arguments that they “must”; see the recent Samsung vs Apple trial results). I believe they truly wanted to solve the problem of mobile phone UIs that take too much work to get at information. They make this point strongly in their advertisements: they wanted information to be “glanceable” so that you can get off your phone and back to real life.
My main point is that if you want to discuss “innovation”, use the same definition across both camps, and apply it not to a single “feature” but to the product overall and how it solves a problem.