Apple Pay is Useful for Stealing Money. Here is the Proof
Written by Thomas Fox-Brewster
Apple Pay has numerous benefits when it comes to ease of use and security. It brings payments under the manifold protections of iOS, whether that’s Apple’s much-debated encryption or largely successful repellence of malware.
But, according to security researchers from anti-fraud firm Pindrop, there’s one area where Apple Pay and its banking partners don’t go far enough to stop criminals: preventing stolen credit card details from being uploaded and accounts subsequently drained. Criminals who acquire pilfered credit card data, purchasable from so-called “carding” sites for as little as $2, can add the stolen cards to an Apple Pay account. This means they don’t have to clone the card, a method of fraud that has become much more difficult with the advent of EMV chips, which hold authenticating information and are far harder to copy than easily replicated magstripes.
But fraudsters cannot simply upload any card to Apple Pay; the level of security differs for each credit card provider. When a customer attempts to add a card, Apple connects with the relevant bank, sending it encrypted credit card data along with some information about the customer’s iTunes use and other data about their phone. The bank then imposes its own authentication checks. Sometimes this requires a phone call during which the user has to provide more information; in other cases a verification code is sent by email or text.
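To make that flow concrete, here is a minimal sketch of the kind of risk-based decision an issuer might make when a card is provisioned. Everything in it (the field names, the risk score, the thresholds) is an illustrative assumption, not any bank’s or Apple’s actual logic.

```python
# Hypothetical sketch of an issuer-side provisioning decision.
# Field names, risk signals, and thresholds are illustrative assumptions,
# not any bank's or Apple's actual implementation.
from dataclasses import dataclass

@dataclass
class CardOnFile:
    cvv: str
    expiry: str          # "MM/YY"
    holder_name: str

@dataclass
class ProvisioningRequest:
    cvv: str
    expiry: str
    account_name: str        # name on the Apple Pay account
    device_risk_score: float # derived from device/iTunes metadata, 0.0-1.0

def decide(req: ProvisioningRequest, card: CardOnFile) -> str:
    """Return 'approve', 'verify' (step-up needed), or 'decline'."""
    if req.cvv != card.cvv or req.expiry != card.expiry:
        return "decline"
    if req.device_risk_score > 0.7:
        return "verify"      # e.g. a phone call, or a code sent by email/text
    if req.account_name.lower() != card.holder_name.lower():
        return "verify"
    return "approve"

if __name__ == "__main__":
    card = CardOnFile(cvv="123", expiry="08/27", holder_name="Jane Doe")
    req = ProvisioningRequest(cvv="123", expiry="08/27",
                              account_name="Jane Doe", device_risk_score=0.2)
    print(decide(req, card))  # -> "approve"
```

In these terms, the weaknesses Dewey found amount to some issuers either never taking the “verify” branch or making it trivial to satisfy.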
To find out how far banks and credit card providers went to get extra verification from users, Pindrop researcher David Dewey asked his co-workers to donate cards to be uploaded to his own Apple Pay account. Dewey, acting as the “bad guy”, ended up testing cards from four different banks, though he isn’t revealing which ones. In doing so, he effectively proved controversial claims made by mobile payments consultant Cherian Abraham a year ago that Apple Pay could be used for exactly this kind of fraud. Though it’s unclear whether the problem is as “rampant” as Abraham suggested, it’s also doubtful that banks have expended additional effort to prevent stolen credit card use on the platform since early 2015.
With one provider, all Dewey needed was the credit card number, CVV and expiration date – i.e. exactly what is printed on the card and the kind of data frequently traded on criminal web forums. In another case, it appeared the provider would only carry out an extra check if the name on the Apple Pay account did not match the cardholder’s. Simply switching the account name to match meant no additional checks were made.
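A name-mismatch rule of that kind can be written in two lines, and defeating it costs the attacker nothing, since the cardholder’s name is part of the stolen card data anyway. The function and names below are hypothetical, for illustration only.

```python
# Hypothetical illustration of why a name-mismatch check alone is weak:
# the attacker controls the account name, so making it match costs nothing.

def needs_extra_check(account_name: str, cardholder_name: str) -> bool:
    """The flawed rule described above: step up only when the names differ."""
    return account_name.strip().lower() != cardholder_name.strip().lower()

stolen_cardholder = "Jane Doe"
print(needs_extra_check("David Dewey", stolen_cardholder))  # True  -> extra check
print(needs_extra_check("Jane Doe", stolen_cardholder))     # False -> waved through
```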
Even where he was forced to call, the Pindrop researcher, who will be revealing his findings at the RSA Conference in San Francisco today, was able to pass security questions with information gleaned from simple Google searches for his colleagues.
FORBES contacted a host of major banks that work with Apple Pay (the list is now extensive). Two replied, but their responses weren’t exactly clear. “We do authenticate to protect customers against fraud, but we don’t disclose any more detail than that publicly for security reasons,” said a spokesperson for Chase bank.
Wells Fargo said customers “may” be asked to provide additional verification, but couldn’t say when it determined that extra step was necessary. American Express didn’t respond to a request for comment but it used similarly vague language in its Apple Pay FAQs: “For security purposes, we may ask you to enter a one-time Verification Code to confirm your identity.”
Bank of America has a similar process, but it’s unclear if it’s always enforced. A spokesperson said: “We use a variety of methods to verify cards when loaded, but we don’t go into specifics for reasons of security.”
Capital One gave a vague response too, a spokesperson commenting: “Capital One uses a variety of authentication methods that depend on the channel being used and risk presented. These include, but are not limited to, biometrics, secure one-time-PIN, and traditional username/password.
“Of course we continue to invest in steps to prevent fraud, including monitoring, new authentication methods and other controls.”
Fraudsters often get around these codes by social engineering banks anyway, said Dewey, noting it was “very common” for an attacker to call and change email or phone contact details before adding a stolen card.
According to Dewey, Apple Pay is currently the easiest way around the protections offered by EMV chips. “If you want a quick and dirty way, this is it,” he added. “Fraudsters and hackers are like water: they’re going to take the easiest path to get what they want. Right now, this is that easiest path… There’s no point of even trying to find a vulnerability in EMV because this works so well.”
Given that Wells Fargo and Bank of America are planning to let users withdraw funds from ATMs using Apple Pay, the situation could become even more severe if extra precautions aren’t enforced.
Apple mistakes?
Though banks’ protections were patchy, Dewey believes Apple is not entirely blameless either. He claimed that competitors Samsung Pay and Google’s Android Pay go a little further to protect users from such fraud. “We tried the exact same cards between the others, and security issues that arose in Apple Pay were not present in Samsung [or Google],” he added.
In particular, Apple does not implement what’s known as “rate limiting” in the service, Dewey claimed. Rate limiting stops attackers from making unlimited guesses at data they are missing. Often, cheaper carding deals provide just the card number and expiration date, with the crucial CVV missing.
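In practice, rate limiting just means counting failed attempts per card and refusing to accept more once a threshold is crossed. Here is a minimal sketch, assuming a per-card counter with an illustrative cap of five attempts; the cap and the lockout behaviour are assumptions, not any vendor’s actual policy.

```python
# Minimal sketch of per-card rate limiting for provisioning attempts.
# The attempt cap and lockout behaviour are illustrative assumptions,
# not any bank's or wallet provider's actual policy.
from collections import defaultdict

MAX_ATTEMPTS = 5  # illustrative cap

failed_attempts: defaultdict[str, int] = defaultdict(int)

def verify_cvv(card_number: str, guessed_cvv: str, real_cvv: str) -> bool:
    """Accept or reject a CVV guess, locking the card after repeated failures."""
    if failed_attempts[card_number] >= MAX_ATTEMPTS:
        # Refuse further guesses and flag the card for fraud review.
        raise PermissionError("too many failed attempts; card flagged for review")
    if guessed_cvv == real_cvv:
        failed_attempts[card_number] = 0
        return True
    failed_attempts[card_number] += 1
    return False

if __name__ == "__main__":
    for guess in ("000", "111", "222", "333", "444", "555"):
        try:
            verify_cvv("4111111111111111", guess, "847")
        except PermissionError as err:
            print(f"guess {guess} blocked: {err}")  # fires on the sixth attempt
```

Dewey’s point is that nothing like this check is enforced on the provisioning path for some issuers, so an automated tool can cycle through every possible value.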
Apple had not provided comment at the time of publication. But FORBES understands that while Apple doesn’t prevent brute-force attempts via Apple Pay, it does flag such attacks to banks when they occur. It’s then for the bank to take action, which is in line with the Cupertino giant’s policy of leaving much of the fraud prevention to the financial bodies.
Dewey, however, created a tool that would quickly guess the correct CVV number of a credit card being uploaded to Apple Pay. As he noted, there are only 1,000 possible combinations of three digits, something a computer can run through in seconds.
Dewey noted that provisioning requests pass through Apple Pay unchecked, and that some providers offer no protection against this kind of “brute force” guessing. Rather than rely on banks, Google and Samsung prevent these attacks in their respective apps, he said. “[Google and Samsung] have been much more diligent about preventing automated brute force attacks.”
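The arithmetic behind that claim is straightforward. A back-of-the-envelope sketch follows; the request rate is an assumed figure for illustration, not something Pindrop measured.

```python
# Back-of-the-envelope arithmetic for guessing a three-digit CVV.
# The request rate is an illustrative assumption, not a measured figure.
cvv_space = 10 ** 3                # 000-999: 1,000 possible values
requests_per_second = 50           # assumed throughput of an automated tool
worst_case_seconds = cvv_space / requests_per_second
print(f"Exhausting every CVV takes at most {worst_case_seconds:.0f} seconds")  # 20 seconds

# With a cap of five attempts per card (as in the rate-limiting sketch above),
# a blind guesser succeeds with probability 5/1000 per card.
attempt_cap = 5
print(f"Chance of guessing within the cap: {attempt_cap / cvv_space:.1%}")  # 0.5%
```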
“If you know how to do this, you can run very quickly through a set of cards that you have,” added Vijay Balasubramaniyan, CEO and co-founder of Pindrop, which recently secured a $75 million Series C in an investment round led by Google Capital. “Apple Pay is standing on the shoulders of so many different security systems… there’s not one fundamental flaw, it’s just bad security design.”
Source: Forbes Tech