Apple is great at making things ‘just work’ which is why when things go wrong with Apple, it’s that much worse. This is just a quick note on a user experience (UX) issue that bothers me about how Apple syncs iDevices. I’ll preface this by saying that since Apple liberated their iDevices from the tether of iTunes, I think they’ve put a lot less focus into improving the iTunes experience. This issue, however, I believe long pre-dates Apple eliminating the need for iTunes.
If your iDevice is full, you need to remove data from the device to make room for new photos, videos, etc. Sometimes you can just offload your photos and videos to make room for more, but sometimes you want to clear out more space, so you need to remove apps. There are a few ways to do this:
Hold down an app in the Springboard view (aka the Home Page) until it starts jiggling. Press the ⓧ in the corner of the app icon, and approve its deletion. Find other apps to delete and continue the process. Upside, can be fairly quick. Downside, no way to know which apps take up the most room.
Go to Settings > General > Usage and see what is taking up space on your iDevice, sorted by what is taking up the most space. Upside, you know which apps take up the most space so you don’t need to waste time deleting apps that might not make much of a difference. Downside, it can be infuriatingly slow to calculate your usage and display the space used by everything.
Connect your iDevice to your computer, and delete the apps in the Apps panel of the iDevice in iTunes. Downside, does not immediately delete anything. Upside, can quickly and easily sort apps by size, name, kind, category and date – making it very easy to figure out which apps to delete.
It’s the last one I’m focused on here. While not immediate, it is the easiest way to go through your apps and figure out which ones to delete. However, there is one really dumb thing about the way this works. Let’s throw out some numbers. Say you only have 50MB of free space on your iPhone, and you’ve added 200MB of music to sync from your computer. Simple math shows you need 150MB more space on your iPhone for this to work. So you go to the Apps tab and tell it to remove 500MB of apps from your iPhone. Should work, right? No, not really. It’s pretty simple: Apple should try deleting things before copying new things to your iDevice. Unfortunately, it doesn’t. Instead it tells you there isn’t enough room on your iDevice to finish syncing.
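The fix seems trivial: compute post-deletion free space before checking whether the new content fits. A minimal sketch of the check, using the hypothetical numbers from the example above:

```python
def sync_fits(free_mb: int, to_add_mb: int, to_remove_mb: int) -> bool:
    """Return True if the sync would succeed when deletions run first."""
    # Apply the queued app removals first, then see if the additions fit.
    return free_mb + to_remove_mb >= to_add_mb

# 50MB free, 200MB of music to add, 500MB of apps queued for removal:
# delete-first succeeds (50 + 500 >= 200), while iTunes' order fails (50 < 200).
```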
Why? I have no idea. Syncing isn’t a new concept. Apple has been syncing from the Mac to cell phones since long before the iPhone (remember iSync?). It seems pretty logical to me that if one of your sync steps is to remove data from a device, that step should happen first. Instead, you’re forced to go with option one or two above, just to make room for the sync to happen.
Some of you are probably scratching your heads wondering what the OHA is to begin with, and that’s not so surprising. What you might be shocked to know is that, officially, the OHA is the organization that guides the development of the Android operating system.
You can be excused for thinking Android was a product of Google. The OHA hasn’t even bothered to update their own web site since 2011. The last phone manufacturer to join the OHA did so in 2009 (Acer). Even when a major breach of the OHA partnership emerged in 2012 (the launch of a phone by OHA member Acer running an ‘incompatible’ version of Android) it was Google itself which responded, not the OHA. Interestingly, in Google’s response, it does mention the OHA and the responsibilities of its members. So what happened?
The Mobile World in 2007-2008
Back in 2007 when the OHA was launched, we were living in a very different mobile world. The most popular phones, by a wide margin, were made by Nokia (and were ‘candybar’ shaped). On January 9th, 2007, Steve Jobs got up on the stage in the Moscone Center in San Francisco and introduced the iPhone to the world. I think some handset manufacturer executives crapped their pants. No really. Crapped. Their. Pants. Many people disconnected from the cell phone world made fun of the iPhone, thought it should have buttons, didn’t think the touch screen would work, and so on. I believe the people who made phones, however, understood that even if every bad thing being said about the yet-unreleased iPhone were true, it was still disruptive.
Only a few short weeks later the LiMo Foundation was launched by a consortium of handset manufacturers and mobile carriers. These included, to start, Samsung, Panasonic, NEC, NTT Docomo, and Vodafone. While people in the US might not think of NEC and Panasonic as major phone manufacturers, they were at the time major suppliers to the Japanese carrier NTT Docomo.
The Origins of Android
Android was originally a company founded in 2003 by, among others, Andy Rubin, who had previously founded Danger, Inc. (who made the Hiptop/Sidekick phone). Android was acquired by Google in 2005, and as a Google project they were toiling away to make a new operating system for mobile devices. It’s no secret that the January 2007 iPhone announcement was a major kick in the gut to the Android team. They were planning their own mobile phone revolution, and now their goals seemed antiquated even before launch. This led to a major delay in the launch of Android, in order to regroup and change its focus to incorporate what they learned from the iPhone. The iPhone launched in June 2007.
It wouldn’t be until November 2007 that the Open Handset Alliance would be launched. Founding members included, in addition to Google, handset manufacturers, mobile carriers, chip developers, and other software companies. The handset companies included Samsung, LG, Motorola, and HTC. Chip companies included Broadcom, Qualcomm, Intel and Marvell. Carriers included NTT Docomo, China Mobile, Sprint Nextel, Telefónica, Telecom Italia, and T-Mobile. It’s interesting to compare that when Apple launched the iPhone it had one carrier partner, AT&T.
The goals of the OHA were to simplify the production of phones (by standardizing the software as well as the hardware), but also to simplify the work of mobile software developers. Google knew only too well how hard it was to develop software for mobile phones, where every model would need custom development and the carriers were the gatekeepers of what ended up on their phones. Android, and the OHA, sought to change the development cycle for mobile software.
In February 2008 I was present at the Mobile World Congress in Barcelona, probably the first major industry conference since both LiMo and OHA had formed. I remember thinking at the time that while LiMo was already showing phone models, that the OHA, and Android, had a lot more buzz at the conference. For an interesting perspective, see Who Will Control the Heart of Handsets? from Businessweek in 2008. Undoubtedly, Google spent a lot at the conference to build buzz. They needed buzz, because their first phones were still delayed, not to be launched until the end of the year.
The Samsung Factor
One interesting aspect to all of this, is Samsung. Notice that they were members of both LiMo and the OHA. They were also part owners of the dominant mobile operating system at the time, Symbian. People associate Symbian with Nokia (and indeed Nokia bought out all of Symbian just months after that conference, and turned it over to the Symbian Foundation as an open-source project), but at that time it had several shareholders, including Nokia, Motorola, Ericsson and Samsung. Samsung was shooting in all directions to become a major player in the mobile phone business.
So what happened to the Open Handset Alliance and their supervision of the development of Android? It seems, and I’m open to being shown I’m wrong here, that the OHA was nothing more than a PR stunt orchestrated by Google to make it seem like Android was a standard, when in fact it was a Google product. Sure, it probably helped to get so many companies in the same room, and connected through the OHA, to coordinate on any number of things. Some credit the OHA with helping to standardize the use of Micro USB for charging mobile phones. In the end, however, the OHA was a marketing tool for Android.
Google’s Reassertion of Dominance
Once Android took off, Google reasserted its public dominance in the development of Android. Google led the development of the flagship Nexus phones, even while letting other companies manufacture them, starting in early 2010. In August 2011, Google announced its purchase of Motorola, bringing hardware manufacturing in-house. Is it coincidence that the last update from the Open Handset Alliance came out just a month earlier in July 2011? I’m not a big believer in coincidences. Buying Motorola was perhaps the final nail in the coffin of the OHA as anything other than a tool for Google to enforce rules on Android licensees.
While Google sold Motorola to Lenovo earlier this year, it kept Motorola’s patent portfolio of somewhere north of 17,000 patents. While those patents can be used to ‘protect Android’ from other companies (such as the members of the Rockstar Consortium that bought Nortel’s patents shortly before Google bought Motorola) they can also be used as a stick to keep OHA members in line (such as when Google got Acer to cancel the launch of an Aliyun OS-based phone in China in 2012). Was the Acer strong-arming to prevent fragmentation of Android, or to hurt the emergence of Aliyun as a serious competitor to Android? You be the judge.
How and why WhatsApp grew at the rate it did in the years it has been in existence is a topic of much discussion. Most people attribute it to the fact that WhatsApp got rid of the concept of a ‘buddy list’ and just used the person’s address book in their phone to connect them to other WhatsApp users. I feel like the side-effects of this system haven’t been discussed in detail.
Multiple SIM Cards
One topic I have seen discussed is the problem people run into when they use multiple SIM cards, such as switching cards when traveling. I recall traveling last year and being surprised, when I loaded a different SIM, that WhatsApp recognized the phone was operating on a new number and asked if I wanted to switch to the new number. At the time I didn’t realize the significance of the switch. When you change numbers, anyone who has your other number in their address book becomes disconnected from you (unless they also have the new number).
Why is it that a single WhatsApp account can’t be connected to more than one phone number/device? It seems a silly limitation. Changing the number from the WhatsApp perspective seems to be based on the consideration that a person might change their phone number. With number portability, it seems to me more likely that people switch SIMs to get service in different places, such as cheaper service in other countries.
In the developing world it’s very common for people to carry more than one SIM card with them, and some phones even support multiple SIM cards. Even Samsung makes dual-SIM models of its Galaxy S phone line, although those phones are not available (directly) outside of Asia.
So what does that have to do with dead people? My current address book on my iPhone has entries that date back to my Pilot 5000 (that was a first generation Palm device released in 1996). That address book was eventually synced with my Mac Address Book, and then to my iPhone. The oldest entries in my address book don’t have cell numbers in them, as in 1996 not everyone had one. Among the people whose cell numbers I have, if they changed their numbers before number portability became common, and then later joined WhatsApp with their new numbers, obviously I wouldn’t know they were on WhatsApp. However, consider what happens when someone in your address book passes away. Unfortunately, this has happened. After enough time this happens to everyone. What happens to that person’s phone numbers? Generally they’re released back into the pool of available numbers and assigned to new subscribers. What happens when those people sign up to WhatsApp and register their phones with their new number? That’s right. Dead people from your address book show up in WhatsApp. I’ve had this happen a couple of times that I’ve noticed.
Flipping Through My Contacts
As I was writing the above I flipped through my WhatsApp address book. What I found was interesting. One dead person. Several people with multiple listings, where one was clearly their current number and others their older numbers. Sometimes I have more than one cell number for people who had multiple phones and then dropped some, without my removing the numbers that were dropped. I noticed one friend with one account which appeared to be him, and another with a status written in Arabic. While it’s possible my friend learned Arabic, I’m guessing it’s more likely he got rid of one phone and that someone else who speaks Arabic now has that number. Lots of people I am no longer in touch with, or at least not in touch with via phone (Hi Facebook Friends), show up with WhatsApp accounts that are clearly not theirs either. Other than the status message, one can tell by the profile image as well. When I have a male friend and a female photo shows up, there’s a good chance it’s not the right person.
Another sub-category of incorrect listings in WhatsApp is travel numbers. I live in Israel and when people come here from the US to visit they frequently rent a phone or SIM for their visit. I add their temporary number to their address book entry in my phone, and rarely remember to remove it after they leave. All those people who have visited from abroad over the years, whose phone numbers have since ended up on other people’s phones here in Israel, show up among my WhatsApp contacts.
I even show up in my WhatsApp contacts, and can message myself. That’s pretty funny.
These problems are not unique to WhatsApp. I’ve seen them with Viber as well. I would think all of the mobile-first messaging apps that have linked to the phone’s address book have similar problems. These are not insurmountable problems, and I’m sure they will be resolved in the future. Hopefully one day I won’t have to worry about dead people showing up in my messaging apps.
First of all, if what’s written above is your password, you need to change it now. I’ll wait. Okay, good, now for the rest of the article.
Why Passwords Don’t Work
It’s not much of a secret that passwords are not a very good way to secure information. The real problem is that when companies try to make users choose more secure passwords, they often end up making the whole system less secure. Does that seem counterintuitive? Here’s a scenario. A company wants to make its corporate systems more secure. It decides that the passwords its employees are using are not secure enough, so it institutes rules for passwords, which include:
Must be 8 characters or longer
Must include a lowercase letter
Must include an uppercase letter
Must include a number
Must include a non-letter/number character
Must not be the same as the previous password used
Must not be the same as the username, or contain the username
You’ve probably run across these rules before. You may not have seen all of them, but you’ve probably seen most of them, often many of them within a single system. In theory, these are all good rules. Where they lead to a less secure system is that most people can’t remember a password that meets all those requirements. Did I make the first letter uppercase? Or the last? Did I replace the O with a zero, or the A with an @? Or both? Since different sites have different requirements, you end up with different passwords.
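The rules above map directly to a validator; here is a minimal sketch in Python (the previous-password rule is omitted since it needs stored history, and the function name is my own):

```python
import re

def meets_policy(password: str, username: str) -> bool:
    """Check a password against the policy rules listed above."""
    if len(password) < 8:                          # 8 characters or longer
        return False
    if not re.search(r"[a-z]", password):          # a lowercase letter
        return False
    if not re.search(r"[A-Z]", password):          # an uppercase letter
        return False
    if not re.search(r"[0-9]", password):          # a number
        return False
    if not re.search(r"[^A-Za-z0-9]", password):   # a non-letter/number character
        return False
    if username.lower() in password.lower():       # must not contain the username
        return False
    return True
```

The irony, of course, is that a password passing every one of these checks is exactly the kind people forget.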
Take a look at Apple’s requirements for selecting a password for an Apple ID (which is used for everything from the iTunes Store to their iCloud e-mail accounts, etc.) for a real-world example.
Originally the only requirement most sites had was that you had to have 8 characters. People generally can’t remember random 8-character passwords, so they use words they can remember, perhaps with some modifications. Introduce a requirement like a number and people need to change what they’ve been using. Perhaps one site has a number requirement and another does not; human nature leads one to use the same password with the only difference being the number. Now add in all the other requirements, and all of a sudden people are using many variations of their original password. When different sites have different requirements, people start getting confused and need to send themselves password reminders on sites they don’t use often.
Of course, a user should be using the most secure password they can, but the reality is simply that people use whatever is easiest to remember. If they can’t remember their password, they write it down. Or put it in their cell phone address book. Or keep a file on their computer listing all their passwords. The fact that someone has now put their secure password in an insecure location completely destroys the whole security system. Now instead of having a less secure password that the person could remember, you have a more secure password that is written down under the user’s keyboard at their desk, on a piece of paper in their wallet, or sitting in a text file on their computer.
Another simpler look at this problem is PIN codes. When I lived in the US and opened bank accounts there, the bank teller would always let me enter a PIN code into a number pad so I could choose my PIN code without having to tell it to the bank teller. In Israel, my experience has been that I haven’t been able to choose a PIN code, and have instead been given a printout (using a special envelope that allows a PIN to be printed without being seen) where I need to use the PIN code that was assigned by the bank. So what’s more secure, the PIN I chose using the number pad, or the one assigned randomly by the bank? You might think the randomly assigned one, since it can’t be guessed using knowledge about me. Imagine how many people use their birthday as their PIN code (which by the way, if you do, you should change your PIN).
So the randomly assigned one is more secure, right? Well, no. Since people have no reference to remember a random number, they tend to write it down or put it in their phone somewhere. You might think you can easily remember a four digit number, but what if you have multiple accounts, all with random PIN codes? I would say, therefore, that as long as you don’t choose an easily determined PIN code, being able to choose one is probably more secure.
The long-known solution to these problems is to require a second piece of information in addition to your password; that second piece of information is what gives two-factor authentication its name. Bank machines have always had two-factor authentication – you need a physical bank card and you need to know your PIN.
With online two-factor authentication, what the second piece of information is gets complicated.
Hardware Code Generators
One of the first practical solutions to two-factor authentication was to introduce hardware code generators. If you’ve worked in high-security locations like financial institutions, military contractors, government offices, etc. it’s likely you’ve seen some form of code generator.
The RSA SecurID token is one of the more common physical code generators, and has been around for just under 20 years as far as I can remember. A small dongle intended to fit on your keyring, it generates a numeric code that changes based on the time. When logging in to a secure service, you would enter your password and the number shown on the token’s screen at that moment. The hardware token is tamper-proof, meaning that if you try to open it up to examine it, it would break and not work anymore. The great thing about these kinds of code generators is that there is no need to be connected to a network; they just work based on the current time.
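Time-based tokens share a simple idea with the open HOTP/TOTP standards (RFC 4226 and RFC 6238): both sides hold the same secret and derive a short code from a counter, which for TOTP is the current 30-second time step. A sketch of the open-standard version (not RSA’s proprietary algorithm) in Python:

```python
import hmac
import hashlib
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """HMAC-based one-time password (RFC 4226)."""
    msg = struct.pack(">Q", counter)                      # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                            # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

def totp(secret: bytes, interval: int = 30) -> str:
    """Time-based variant (RFC 6238): the counter is the current time step."""
    return hotp(secret, int(time.time()) // interval)
```

Because the only shared state is the secret and a clock, the token never needs a network connection, which is exactly the property that makes the hardware dongles so convenient.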
The fact that the SecurID tokens and their like were tamper-proof points to one of their vulnerabilities – they are based on a secret which, if known, makes them completely insecure. This became evident back in 2011 when RSA itself was hacked, leading to tens of millions of SecurID tokens having to be replaced. Lockheed Martin, the military contractor responsible for some of the US military’s most important defense systems like the F-35 fighter jet, Trident missiles, satellites, etc. was hacked shortly after the RSA hack, before the compromised tokens could be replaced (and perhaps before the extent of the RSA breach was known, or at least known to Lockheed Martin).
There are other hardware solutions besides stand-alone code generating tokens. One interesting example is the YubiKey. It is not that different, from a security point of view, from the SecurID token. The difference is that it has no screen, no battery, and doesn’t work by itself. Instead, it plugs into your computer using USB (or into your mobile device using NFC with one model) and, using software on your computer and servers online, generates the unique password used for authentication. Some are specially made for specific services, and some are more general-purpose. Some can even be configured to output a static password as if it were a USB keyboard. A good summary of the technical details of the YubiKey can be found on their web site. The big advantage of the YubiKey is that it’s small, needs no maintenance (no battery), and is cheap ($25 for the basic model).
Software Code Generators
While hardware tokens have been around for a long time, nowadays when so many people carry around smartphones, it has become possible to create software-based code generators. RSA in fact offers SecurID software apps for most major mobile operating systems, including iOS, Android, Blackberry, Blackberry 10, Windows Mobile, Windows Phone, Symbian, etc. One of the advantages of an always-connected smartphone, however, is that there are now many more options available for software code generation. Indeed, many services provide their own software code generators within their smartphone apps.
As an example, Facebook’s mobile app has a code generator built in, which you can use with their two-factor authentication system which they call Login Approvals. I have a friend who enabled Facebook’s two-factor authentication last year, then got locked out of his account for months. Now when you sign up, they give you a week to shut it off just in case you end up locked out. They also let you print out up to ten codes you can use when you don’t have your phone. I guess you put that piece of paper in your wallet? It seems there’s a pattern here.
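Those printable one-time codes are easy to generate securely; here is a sketch using Python’s `secrets` module (the count, length, and digits-only format are my guesses, not Facebook’s actual scheme):

```python
import secrets

def backup_codes(count: int = 10, length: int = 8) -> list:
    """Generate one-time recovery codes from a cryptographic RNG."""
    digits = "0123456789"
    return ["".join(secrets.choice(digits) for _ in range(length))
            for _ in range(count)]
```

The server stores hashes of the codes and marks each one used after a single login, so the printout is only as dangerous as the wallet it lives in.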
Two-factor authentication itself is not a panacea. It needs to exist within a larger framework of security that needs to be well thought out. Dropbox, which offers two-factor authentication using one-time codes sent via SMS or via their mobile app, had their two-factor system completely bypassed by hackers who used a fake e-mail address and pretended they had lost their phones. It’s quite clever. Luckily for Dropbox, the researchers contacted the company before publishing their exploit so it could be fixed. Not all hackers are so generous, however.
Biometrics to the rescue?
Some people believe that biometrics will be what replaces the use of passwords. People have believed that for decades. There are reasons it hasn’t happened yet, and reasons it’s unlikely to happen any time soon.
Biometrics is the use of unique physical body characteristics to verify your identity. The most well known biometric type in use is the fingerprint. Other biometric types include iris, retina, face, hand geometry, ear shape, gait, odor, speaker recognition, writing recognition, typing rhythm, etc. Not all of these are commercialized, but some like iris recognition and hand geometry are widespread. What is common to almost all of these biometric types, other than fingerprints, is that the hardware required to capture the biometric data is much too big to be used in a mobile device, too expensive, or too clunky from a user experience point-of-view. Some progress is occurring in allowing face recognition via the front-facing cameras in mobile phones, and possibly iris scanning, but things like odor and gait, ear shape, writing recognition, etc. are not coming to mobile any time soon. Without mobile, these technologies are really irrelevant in terms of password replacement.
There’s a reason fingerprints can’t really be used to replace a password, and I’ll get to that, but first let’s take a look at what you might think is fingerprints already in use for this purpose.
Touch ID and iCloud Keychain
You might be thinking to yourself that the iPhone 5S has Touch ID, so therefore fingerprint biometrics have made it into the mainstream of mobile authentication. Well, no. Touch ID is innovative in a lot of ways, and it can replace entering a PIN to access your phone, but by itself it does not replace passwords (it does allow you to buy things from the iTunes Store and other Apple ID-connected stores, but that’s because it’s Apple and they themselves confirmed the phone is yours). Apple knows this, which is why it has been deploying iCloud Keychain into an increasing number of countries (over 100 countries now). You might not have even noticed iCloud Keychain, which was introduced with iOS 7.0.3 and OS X 10.9 just this past fall.
iCloud Keychain lets you sync passwords (as well as credit card details and Internet account info) via iCloud between your Apple devices such as your Mac, iPhone and/or iPad. When you go to login to a web site using Safari on your Mac or iOS device, it will ask you if you want to sync the password using iCloud Keychain. If you are registering for a new account on a web site, it will recommend a randomly generated password, which you will not need to remember (eliminating the paper taped under your keyboard problem), since it is synced between all your devices. You can of course choose your own password instead. Either way, if you want to save it to iCloud Keychain, it then becomes available for auto-fill on all of your devices.
What does iCloud Keychain have to do with biometrics and two-factor authentication? Let’s look at how you sign up for iCloud Keychain (and why it needed to be rolled out in specific countries). You turn on the feature, and authenticate using a mobile phone. This is not dissimilar to how mobile messaging apps like WhatsApp verify your phone belongs to you (one of WhatsApp’s biggest expenses by the way). Once your phone is verified, it can be used as a verification device for your non-mobile devices such as your Mac (and I define mobile here as cellular-connected). Now that your mobile device is authenticated, all of the passwords stored in your iCloud Keychain are essentially secured with Touch ID (or a PIN code if you do not use it). Strictly speaking, this is not two-factor authentication. The web site you’re connecting to using the username and password stored in iCloud Keychain has no idea that in order to enter that password you had to authenticate your phone via SMS, and then access it using your fingerprint. If your password is found by someone, by hacking or otherwise, the fact that you use a fingerprint scanner on your iPhone does not affect the fact that they can access the web site without your phone.
The problem with fingerprints
In general, fingerprints are a great way to verify your identity. There are some people that have unreadable fingerprints (for a variety of reasons) but they are small in number. Fingerprints can also be faked – you might remember that Touch ID itself was circumvented with a fake fingerprint just two days after being released. Two days! Those issues aside, fingerprint biometrics is a fairly well researched technology, and the person who would want to fake your fingerprint would need both a copy of your fingerprint and access to your phone.
The big problem with fingerprints, like all biometric traits, is that once they are compromised there is no going back. If your fingerprint is copied, that’s it, end of game. Sure some fingerprint scanners try to scan under the skin to prove the fingerprint is on a living person, etc. but Touch ID also claimed that capability, and that turned out to be false.
In order to prevent losing the ability to use one of your biometric traits, a considerable amount of research has gone into developing a way to mix the benefits of biometrics and cryptography. This research has led to techniques allowing you to create a password that is based on your biometric trait, but cannot be reversed to reveal your biometric trait. Additionally, you can generate an unlimited number of these passwords, allowing you to change your biometric-based password just like you would change your regular password. This is called ‘revocable biometrics’ and it uses a variety of techniques such as fuzzy extractors. It’s a complicated area, but one thing which has been found in the extensive academic research is that a single fingerprint doesn’t contain enough data to create a secure revocable password. In other words, you will never be able to use a single fingerprint to create a secure password that cannot be hacked (at least with current mathematical and biometric understanding).
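One family of such techniques, often called cancelable biometrics, passes the biometric feature vector through a seed-dependent random projection; revoking a template just means enrolling again with a new seed. A toy sketch of the idea (real schemes add error correction, e.g. fuzzy extractors, and use far more features):

```python
import random

def cancelable_template(features, seed, bits=16):
    """Binarize seed-derived random projections of a biometric feature vector.

    The template reveals only the signs of random projections, not the
    features themselves, and a new seed yields an unrelated template.
    """
    rng = random.Random(seed)  # per-enrollment randomness derived from the seed
    template = []
    for _ in range(bits):
        weights = [rng.gauss(0, 1) for _ in features]
        dot = sum(w * f for w, f in zip(weights, features))
        template.append(1 if dot > 0 else 0)
    return template
```

A slightly noisy re-scan of the same trait flips few bits (small perturbations rarely change a projection’s sign), which is why real systems compare templates with a tolerance rather than exact equality.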
So where does that leave us?
At the moment, I’d say we’re pretty much in the same place we were 20 years ago as far as password security. That’s not to say there hasn’t been progress. It is cheaper to implement two-factor security. Dedicated hardware is no longer required. Two-factor security is available to almost any company, although foolproof implementation remains difficult, as evidenced by the RSA and Dropbox hacking events.
That said, there do seem to be some interesting products on the horizon. A company named Nymi has come out with a bracelet that uses your heartbeat to authenticate your identity. That’s an interesting trick, because authenticating via a wristband means the technology can be integrated into any number of fitness bands and smart watches that people are already going to be wearing. I don’t think a stand-alone band like the Nymi has long-term prospects, but the concept is interesting as a feature of other products. Not too surprisingly, it turns out Apple has several patents on a very similar feature going back to at least 2009. I tweeted a few weeks ago that if Nymi had good IP, they’d be an obvious purchase for Apple, but considering Apple’s patent portfolio maybe there’s no need to buy them. Even if their IP is weak and Apple could sue them into oblivion, if their product is further along than whatever Apple has developed, it’s still possible Apple could buy them, but they’ll have a much worse negotiating stance.
A company called EyeLock introduced a hockey-puck-sized iris sensor at CES earlier this year, called the myris. It connects via USB to your computer and lets you authenticate using your iris, improving the false-match rate from roughly 1 in 50,000 for a fingerprint to, they claim, better than 1 in 1.5 trillion. Their next goal is to integrate this into the bodies of laptops and monitors, and eventually mobile devices. They claim to be able to detect that the eyes belong to a real living person (and not a photo, or on the end of a pen if you remember Demolition Man) which if true would be important for such technology. Certainly many biometric systems have been circumvented. It seems this technology is far from fitting in a mobile device, however, so it is years away from being practical.
At the Mobile World Congress (MWC) in Barcelona last week, a Chinese company called YunTab was pushing its $152 YunTab S5 smartphone, which had one unique feature – 3D facial recognition for unlocking the phone. The phone uses two infrared emitters and an extra infrared camera to create a 3D image of your face for authentication. It’s an interesting implementation of facial recognition, considering that other implementations have been circumvented by pointing the camera at a photo or video. I don’t think you’ll see this phone taking over the market (it’s only available in China right now) although it’s not a bad deal for an Android phone with okay specifications.
This is an area in which Apple also has extensive intellectual property. Apple bought Swedish facial recognition company Polar Rose in 2010. Apple also bought Israeli 3D sensor company PrimeSense (the company behind the original Xbox Kinect motion sensors) in 2013. Putting aside motion sensing, the technology is very similar to what the Chinese company is using to build a 3D model, and in fact PrimeSense had filed a patent on enhanced facial detection using depth before Apple purchased them. PrimeSense isn’t the only Apple acquisition with very similar patents. Authentec, which Apple purchased in 2012 and which is better known for the technology behind Touch ID, also had patent applications related to 3D facial recognition. Lastly, Apple has patents of its own in facial recognition, including using 3D information. I bring up the Apple connection again because I think the issue of authentication using biometrics is important, and in the end you will have a device with you that will be verified as yours (a la iCloud Keychain) and that you will be using biometrics to secure it. Apple is likely to be one of the companies making the devices you will use for this purpose.
Where are we heading? We’re heading beyond two-factor authentication to, you guessed it, three-factor authentication. Those factors are what-you-know (a password), what-you-have (your device), and who-you-are (a biometric sample). We need all of these factors, because without any one of them, the others have a much higher failure rate. The key is making them simple to use. Making them simple to use means integrating everything into a single device that you always have with you. Whatever that device, it needs to be able to be authenticated as yours (such as via SMS), needs to be able to securely store your biometric hash, and needs to be able to read your biometric signature. Right now the only device that fits that description is a smartphone, and just barely. I believe much of these functions will be in fact pushed to a wearable device. One reason it needs to be wearable is for security reasons – you’re less likely to lose your watch than your phone. Wearables are certainly on the rise now, and I think you will see many more of them integrating security functionality into them (like the Nymi). What else will wearables be integrating? That’s in my next article…
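Put together, a three-factor check is just the conjunction of three independent verifications; if any one fails, the login fails. A hypothetical sketch (all names, parameters, and thresholds here are mine, not any real system’s):

```python
import hashlib
import hmac

def verify_three_factors(password: str, salt: bytes, stored_hash: bytes,
                         otp: str, expected_otp: str,
                         bio_bits: list, enrolled_bits: list,
                         max_bit_flips: int = 2) -> bool:
    """All three factors must pass; any single failure rejects the login."""
    # What you know: slow password hash, constant-time comparison.
    knows = hmac.compare_digest(
        hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000),
        stored_hash)
    # What you have: one-time code from the verified device.
    has = hmac.compare_digest(otp, expected_otp)
    # Who you are: biometric samples are noisy, so allow a few flipped bits.
    is_who = sum(a != b for a, b in zip(bio_bits, enrolled_bits)) <= max_bit_flips
    return knows and has and is_who
```

The logic is trivial; the hard part, as the article argues, is packaging all three checks into one device people will actually carry and use.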
No one will come away from this video amazed at how humble Stephen Wolfram is, but that’s not the point of the video. It’s an introduction to a forthcoming programming language from Stephen Wolfram, named appropriately enough Wolfram Language, that attempts to build on the past 30 years of his work creating Mathematica, his book A New Kind of Science (humbly referred to on his site as Wolfram Science), and Wolfram|Alpha. It takes the knowledge and algorithms built into Wolfram|Alpha and makes them available in a symbolic programming language. The demo is fairly entertaining (considering its topic) and it should be very interesting to see what is done with this language once it’s available to the general public.
For more information, see the Wolfram Language section of the Wolfram Research web site.