Two weeks of App Testing with Consumers spotlighted for me the fact that iPhone & Android users have VERY different relationships with their phones.
The test case is a mobile marketplace we are building, with separate iPhone and Android apps.
The biggest difference between the two testing groups was that the Android users tended NOT to be mobile-first types, nor native-app types.
From my own experiences building apps and then using the same apps on both platforms, it's a vicious cycle.
On the one hand, development is necessarily focused on an ideal target device running the latest version of Android.
On the other, the phones out in the wild have so many different form factors, and so much variety in terms of what OS is installed, that the experience never feels optimized.
The iPhone is the exact opposite: a handful of form factors and Apple's control over updates mean most iPhones and iPads are running the latest version of iOS.
Let me be clear. As a developer, you **could** optimize for all of these variations of Android, but it would take so much testing and optimization that it would come at the cost of building new functionality.
What's more, the Android end user wouldn't necessarily appreciate the extra effort in the same way an iOS user does.
Like I said, it's a vicious cycle.
Case in point, in the new app that we are building, one question in user testing was how important having a desktop web version of the functionality would be.
Get this: 90% of the Android users thought it was pretty important, most commonly because they saw the PC as the central part of their computing experience -- even though the app is for a highly mobile type of action.
By contrast, 90% of the iPhone users looked cockeyed at the question, noting that the action is designed for palm-in-the-hand, on-the-go behaviors, adding (I'm paraphrasing) that their iPhone is their hub, not the PC.
Same questions. Same product, feature for feature. A mix of young to middle-aged men and women, and the only difference was iPhone versus Android.