Wall Street Journal Online - May 11, 2006 Few Americans say they have been victims of identity fraud, according to a new Wall Street Journal Online/Harris Interactive personal-finance poll. But a strong majority of them are taking steps to prevent the theft of personal and financial information that can lead to fraud.
About 3% of 2,120 adults polled said their identity has been used to open a phone, utility or other type of unauthorized account; 1% had a mortgage or line of credit opened in their name; and 2% said their personal information was used for nonfinancial fraud. Another 6% reported some other type of identity theft. Still, nearly three-quarters of respondents said they watch for suspicious activity on their accounts or shred mail that contains their account numbers.
The Chapell View There's a lot of interesting data in this report, but there are a couple of points that seem particularly striking:
- First, one in four consumers report that they have begun to limit their "online banking transactions." This may make some amount of sense, given that banking information is some of the most sensitive information consumers use online. But 80% also say that they trust banks to "prevent others from accessing [their] sensitive personal information or account number." If so many consumers trust banks in this way, why are they choosing to scale back their online banking?
- Second, retailers fare rather badly in the poll. Only 43% of respondents said that they trusted retailers to safeguard their information. In addition, 30% reported restricting their purchases from online retailers. Given the number of purchases consumers make from retailers every day - and the limited number of data breaches caused by retailers (notwithstanding DSW, Sam's Club, and so on) - this is somewhat surprising. What reason do consumers have to distrust retailers more than other businesses? And do consumers really feel so insecure every time they charge a latte at Starbucks?
Both of these questions have been asked before, as more than one report (for example, last year's Consumer Reports WebWatch poll) has made it clear that consumers view the online space as less than entirely conducive to their privacy and security. What is clear, I think, from this WSJ/Harris poll, however, is that the distrust that many consumers feel for the internet is beginning to bleed over into other areas of their business interactions.
Although only a minority of consumers will ever be victims, identity theft is a real enough threat, and consumers are rightly nervous. There is also the perception that identity theft is more likely to occur because of a consumer's online (rather than offline) actions. So although consumers may trust their banks, they're hardly sure that online banking is safe; and when it comes to businesses with which they have a less established relationship - such as retailers - consumers would much rather be safe than sorry.
When unsure of the risks, 'safe' means providing less personal information online - and shopping online less.
Consumers also reported taking a number of active steps, including checking their credit reports, shredding mail that has account numbers, and so on. These are all quite useful and effective methods of limiting identity theft, but only when ID theft occurs because of a consumer's own actions. In many cases, however, we're finding that ID theft occurs because of business mistakes (see last week's theft of 26 million SSNs at the Department of Veterans Affairs), and there's nothing a consumer can do.
Unfortunately, many consumers feel that the best way to avoid ID theft is to simply limit the information they provide online. This is a potential danger for any online business - and it hits especially close to home for those in the online marketing space. As John Greco, president of the DMA, put it in April, "Consumers are increasingly concerned about the level of trust they can place on marketers." Greco has argued that the way to build consumer trust (and avoid penalizing legislation) is to self-regulate the use of consumer data. Marketing data, he argued, should be used in line with "three r's: relevance, responsibility, and results." Although these are admittedly somewhat vague terms, what I think Greco is getting at is the idea of demonstrating value to the consumer.
Online marketers are hurt by consumer distrust because security fears lead consumers to provide no data at all to businesses - including less sensitive marketing data. If marketers can, through an effective self-regulatory program, demonstrate that they are providing value in return for the marketing data consumers share, then this distrust may be limited. Which is to say: if a marketer learns that I'm interested in buying a new car, uses this to serve me valuable advertisements, is open about what he knows about me, and ultimately leads me to a purchase, I'm a lot happier.
If, on the other hand, I were to receive broadly targeted ads that lead nowhere, with no explanation, I might start to worry about who knew what about me.
Overall, this sort of industry self-regulation may go a long way to help consumers see that they gain some benefit from shared data - and let them focus on the steps they can take to ensure the safety and security of their data. Without such assurances - and something of a push in the right direction - they're likely to remain convinced that it's better to be 'safe' than 'sorry.'
Washington Post - May 23, 2006 As many as 26.5 million veterans were placed at risk of identity theft after an intruder stole an electronic data file this month containing their names, birth dates and Social Security numbers from the home of a Department of Veterans Affairs employee, Secretary Jim Nicholson said yesterday...A career data analyst, who was not authorized to take the information home, has been put on administrative leave pending the outcome of investigations by the FBI, local police and the VA inspector general, Nicholson said.
The Chapell View I used to live in Los Alamos, NM - a town pretty much centered on Los Alamos National Laboratory, a federal research facility. It was very common for people in the town to be working with classified and secure data, and while I lived there, I heard about more than one loss of sensitive data that shut down the lab for days.
In each case, it seemed, the problem wasn't that the data had been accessed by hackers or left unencrypted. An employee had moved computer disks somewhere they shouldn't have gone, or brought home an unauthorized laptop.
I'm reminded of this because the recent breach at the Department of Veterans Affairs didn't stem from any lack of technological security. No - an employee had taken a laptop home from work in order to finish a project, and the laptop was then stolen in a burglary. Unfortunately, it's this simple sort of mistake that can have enormous consequences - whether it's the loss of classified information (as happened in Los Alamos) or more than twenty million sensitive consumer records.
As CNET reports, this data was mostly made up of Social Security numbers (SSNs) - and is likely the largest theft of SSNs ever. It's pretty clear that SSNs, the basic identifier for many aspects of modern life (medical care, bank accounts, college admissions, et cetera), are becoming more and more at risk. I don't know if I agree with Avivah Litan, a security analyst at Gartner, who is quoted by CNET as saying "One out of seven Social Security numbers is in criminal hands...You can't rely on them anymore." I do know, though, that the risks associated with using SSNs have led some colleges to replace them with generic ID numbers - as NYU did, after its own breach was caused by an employee error.
So I'm glad to see that the VA is going to require employee training as a result of this breach. It's unfortunate, however, that it took a breach this massive to prompt it - and that previous reports on data security, as cited by the Washington Post, had focused on the threat of someone hacking into VA computers.
Sometimes, all the data security in the world can mean very little without consistent employee training.
Reuters - May 20, 2006 Four years ago, a small e-mail campaign saved a struggling coffee shop in Portland, Ore. Today proprietor Becky Bilyeu is among the thousands of people fighting to preserve the free flow of electronic mail. Bilyeu contacted the MoveOn.org political advocacy group earlier this spring when she heard that Time Warner's AOL, the largest U.S. Internet service provider, planned to start charging for guaranteed delivery of certain types of bulk e-mail. The fee--a small fraction of a cent per e-mail--took effect two weeks ago. AOL says it will help stop spam, or junk messages, from clogging their customers' inboxes. But many say e-mail should move freely so that people can build and maintain large communities over the Web. Nearly 500 organizations, from the Electronic Frontier Foundation to the Gun Owners of America, have joined together to create a coalition called DearAOL.com.
The Chapell View This is very much like the Net Neutrality debate. For one thing, each side's argument is imperfect... but it's also similar because the Goodmail debate asks essentially the same question as the Net Neutrality debate -- does the ISP have the right to discriminate among different types of Internet traffic?
(Btw, I find it interesting that Yahoo's answer to this question as it pertains to Net Neutrality is "NO," while their answer to the same question regarding Goodmail would appear to be "YES!" More on this another day...)
As a privacy guy, I understand the benefits of creating an ecosystem where emailers need to consider the costs prior to hitting the send button. Most bulk emailers (even the 'reputable' brands) would rather just hit the send button than put together a comprehensive permissions management program. For one thing, it's MUCH easier. So in theory, charging emailers might make some of them think twice before hitting SEND - and that's probably not a bad thing.
And I'm pleased that Goodmail has decided to release their standards, although I'd encourage them to provide additional transparency regarding how their reputation scores are calculated. (I'd also like for them to demonstrate that they are actually certifying to those standards -- but that is a challenge faced by just about any business certification program.)
One thing that hasn't received a tremendous amount of press is this -- Goodmail will fundamentally change the relationship between large email senders and ISPs. Under the current ecosystem, the large email senders are beholden to the ISPs. After all, the large email senders won't make any money if their messages don't get through. And the ISPs are fairly agnostic - their only masters are their subscribers, and if you tick off enough of their subscribers, your messages get blocked.
However, Goodmail fundamentally changes that ecosystem. Once the ISPs start deriving revenue from the delivery of emails, they become beholden to the large email senders. Does that mean that AOL and Yahoo! might give the benefit of the doubt to a large email sender that is paying them lots of money? I hope not, but it's certainly a fair question...
On the other hand....
I don't entirely embrace the argument that says "email should move freely so that people can build and maintain large communities over the Web." I like that argument - again, in theory. It's nice to think about harnessing the power of the Internet to 'do good' in this world. But for every online community that's built upon making the world a better place (or providing Swedish dog lovers a place to talk about pet products, or whatever) there are FORTY other groups who insist that their right of free speech includes the right to bug the hell out of me by polluting my inbox with stuff I don't want. Perhaps I'm throwing the baby out with the bathwater here, but I'm increasingly left with the sense that the baby drowned a long time ago...
Last week, I was invited to present to a group of email marketers in NYC. The luncheon was sponsored by the email marketing division of a large list and data company. It was positioned as a 'value add' - so the parent company could demonstrate its commitment to privacy. I won't tell you the name of the sponsor, nor will I tell you the names of the attendee companies - but I will tell you that we're talking about prominent, top-tier brands. I was joined by two other privacy professionals, whom I've known for some time. Also, most of the attendees (being emailers) were from the online divisions of their respective companies.
Anyway, I thought I'd share some of the audience's sentiments:
The ROI of Privacy was 'nice', but is anyone really doing any of this? - The luncheon attendees were very attentive, and asked some good questions, but the overriding theme of the afternoon was - "I like this in concept, but nobody is really executing this type of program!" This is admittedly a fair point. While there certainly are organizations that are able to affirmatively demonstrate the ROI of their privacy programs (HP being a good example), those organizations are probably not entirely willing to give away the secret sauce to competitors.
Great idea, so where's the turnkey solution? - While the attendees were interested in the concept of an ROI-focused privacy program, they are generally looking for something they can simply implement. Few (if any) have the stomach (or the time) to take the necessary steps to build this type of program. One of the reasons the spammer label fits most email marketers is that it's much easier to just hit the SEND button more often than it is to invest the time and capital to develop a real privacy and permissions program. Many attendees complained about fighting for headcount. (Recognizing that this is a challenge for just about all departments, I think it's particularly problematic for the online divisions of companies - who've historically positioned their industry as cheaper, better, faster, leaner, etc. The online ad space in particular is going through all kinds of growing pains as it attempts to find funding to build out infrastructure around governance... but I digress.) My point here is that there just isn't anything resembling a turnkey solution yet. Companies need to, for the most part anyway, build their own program from scratch. So until there are several demonstrable examples of robust (ROI-driven) privacy and permissions programs, I think most organizations will sit on the sidelines and continue to tap the SEND button.
Great idea! I think my email/data provider should be offering that as a value add service - I think this is a great idea. When I look out at Acxiom, Experian, InfoUSA and other large data companies, I see an increasingly commoditized business. The first one of these companies that is able to demonstrate that it has the capacity to help build out an effective privacy and permissions program will have a HUGE advantage over the rest of the field. (In case any of them want to hire me to help build one of these programs, I'm all ears...)
CNET News.com - May 18, 2006 Net neutrality believers have officially ordained a celebrity poster child. Musician-turned-cafe-proprietor Moby turned up on Capitol Hill on Thursday to urge passage of a proposal by Massachusetts Democrat Edward Markey that would write Net neutrality principles into law. Sporting his signature dark-rimmed glasses, with his head clean-shaven as usual, the artist said that a world without legally binding Net neutrality principles would mean that today's "egalitarian" Internet would be privatized by large telecommunications companies.
The Chapell View I'll admit that I'm still not sure what to make of the Net Neutrality debate. On the one hand, it's difficult to be sympathetic to the ISPs. After all, it's not like this whole concept has snuck up on them. Colleagues of mine at Jupiter were talking about using the Internet to deliver phone and video almost ten years ago.
On the other hand, if technology and innovation are impacting the ISPs' ability to make a profit (or to at least recoup their investment), then perhaps it's reasonable to expect that those same ISPs will seek new ways to recoup their investments.
FTC Press Release - May 10, 2006 A title company that promised consumers it maintained "physical, electronic and procedural safeguards" to protect their confidential financial information, but tossed consumer home loan applications in an open dumpster, agreed to settle Federal Trade Commission charges that its inadequate storage and disposal procedures for sensitive consumer information violated federal laws. The settlement with Nations Title Agency, Inc., Nations Holding Company, and Christopher M. Likens bars deceptive claims about privacy and security policies, and requires that they implement a comprehensive information security program and obtain audits by an independent third-party security professional every other year for 20 years.
The Chapell View According to the FTC's release on the case, the financial services company Nations Holding Company (NHC) and its subsidiaries violated consumer protection laws in two significant ways. First, they failed to appropriately safeguard the data they collected - leaving it on easily accessible computer networks and even trashing it in open dumpsters. Second, they made false representations about the protections they claimed to (but didn't) afford consumers' information.
Where this case is most instructive, I think, is in the reasons listed by the FTC as to why NHC had "failed to provide reasonable and appropriate security." These included failing to use appropriate website security and fraud detection methods. But just as importantly, the FTC cited a failure to "implement...employee screening and training and the collection, handling, and disposal of personal information," "assess risks to the information they collected and stored," and "provide reasonable oversight for the handling of personal information by service providers, such as third parties."
These three factors - employee training, risk assessment, and third-party oversight - are key elements of any privacy protection program. Unfortunately, as businesses - and especially online businesses - focus more and more on technology, companies can sometimes overlook these factors. Providing real protection, however, requires doing more than developing technology or building a secure network. There's some irony here: the technology of collecting and storing data is rapidly improving, and yet this may lead to an increase in possible security threats. Why? Many of these improvements involve storing data on third-party servers (as desktop search applications, for example, do). How big these risks will be remains to be seen, but it's doubtful that technology alone will alleviate them.
The other part of the story is the FTC's increased willingness to go after companies that aren't addressing security issues. And businesses, I think, have reason to take note of this, whatever the final consequence. After all, without real risk assessment, it's hard to determine where possible threats are; without third-party oversight, and without employee training, even the best privacy practices can get lost in the shuffle (or dumpster). Third-party oversight is especially important. In recent months, businesses have been held responsible for acts they authorized third parties to take on their behalf. Even if a business has put the proper privacy and security measures in place, it's going to want to make sure that anyone using its data is following the same guidelines and procedures.
Not all of this has to do with avoiding legal repercussions, and it goes to the heart of why companies should respect consumer privacy for business reasons. Consumers want these protections, and they are more likely to trust - and patronize - a business that enacts them. Chapell & Associates has often argued that privacy isn't just about technology - it's also about what businesses do with technology. The FTC, it seems, agrees, and is acting accordingly.
CNET News.com - May 05, 2006 A key Republican in the U.S. House of Representatives plans to find a way to force Internet providers to keep records of their customers' activities, an aide said Friday. The aide said Rep. Joe Barton of Texas, who chairs the House committee responsible for writing Internet and telecommunications law, has pledged to work on legislation related to mandatory data retention--a concept recently endorsed by the Bush administration as a way to crack down on child pornographers.
The Chapell View Now, I'm not saying that I advocate Representative Edward Markey's (D-MA) proposed alternative, but the impetus behind this proposed bill isn't all that clear to me. In fact, it seems to be the opposite extreme: instead of requiring businesses to delete a great deal - if not all - of the data they collect, ISPs would be required to retain customer information for at least a year.
The bill, an amendment to a broader telecommunications bill being debated, was written by Representative Diana DeGette (D-CO) with the ostensible goal of combating child pornography. Now, that is hardly a position one can argue with. But there's no obvious reason why this should require ISPs to retain customer information for a year after the customer leaves their service. Moreover, according to CNET, a broad reading of the amendment might require just about any website to retain and store this sort of customer information.
As Congress debates a new and updated Telecommunications Bill - the first since 1996 - many provisions are likely to deal directly with the internet and online privacy. It's worrisome, however, that recent legislative proposals dealing with consumer privacy have taken one extreme (required deletion) or the other (required storage). Online privacy is always a matter of balancing business - and government - needs against consumer interests. Markey's bill went too far in the latter direction, while DeGette's does just the opposite. As Chapell & Associates has noted before, collected data is often easily accessible by the government - which is, of course, the intention of this very amendment.
As things stand, telecommunications law generally specifies that ISPs - like most telecommunications companies - must retain information for a limited period of time, and only upon government request. Perhaps this is restricting the government's ability to prosecute certain offenders. But there are definite risks associated with the broad collection and storage of consumer data - something that seems to have gone missing in the DOJ's (and now, Congress's) drive to have such data available to them as needed.
iMedia Connection - May 08, 2006 A Chapell Article After representatives from Google and Yahoo! sat down to discuss click fraud at the Search Engine Strategies New York conference this March, the issue seemed squarely back on the front stage. Maybe it never left, but it's clear that an increasing number of online marketers are expressing concern about the money they've spent on search ads.
But no matter how concerned these marketers are, it's principally a business problem, right? What in the world, you might be asking yourself, is a privacy guy doing talking about click fraud? Bear with me-- I'll get to that in a moment. Taking a look at the current state of click fraud - and the solutions that have been suggested - will make it clear how consumer privacy may soon become an issue...(more).
A Chapell Article I'd like to offer the following two statements. They're both true - and yet they very clearly contradict each other:
- Behavioral Targeting is the future of online media.
- Behavioral Targeting is a load of hype.
Thus, the paradox. Behavioral Targeting (BT) is a load of hype and in some respects should be subjected to cocktail hour mockery in the same way that concepts such as "One-to-One marketing" and "CRM" are often derided. BT has clearly been the unlucky recipient of that dreaded curse of Internet business - it's been horribly over-promised and chronically under-delivered.
But it's also without question the future of online media. From Bob Garfield's Jetsonian world of the future, to Lorraine Ross of USA Today's program for dynamically serving web pages, BT is increasingly recognized as the ultimate way to reach that elusive customer at the right place and the right time.
We all have a stake in overcoming this contradiction. So I'd like to share with you what I believe needs to happen for BT to take off as promised - and how we can address some of the factors that are currently preventing that from happening.
This article was originally written for a popular trade magazine, but was heavily edited. As such, Chapell & Associates has decided to publish the article in its original and unedited format through our website.
CSO Magazine - April 2006 Consumers say they want privacy online although they often behave in ways that contradict those statements. Could it be that most of the complaints come from privacy advocates and not consumers at all? Debating online privacy isn't merely a philosophical exercise. Companies collect reams of information about visitors to their websites and about their customers' Web-surfing ways.
In a recent forum at the Wharton School at the University of Pennsylvania, the panelists - who included Ravi Aron, Wharton professor of information management, Gil Brodnitz, a partner at Accenture, Bradley Horowitz, head of Technology Development for Yahoo's Search and Marketplace group, Steve Johnson, CEO of Choicestream, Declan McCullagh, a writer for CNET News.com, and Brooklyn Law professor Wendy Seltzer - discussed consumer concern about online privacy, data protection laws, and privacy policies.
The Chapell View The panel under review goes a little beyond what you often see at these sorts of events - instead of merely mentioning consumer concern about privacy, or taking it for granted, the panelists seemed interested in delving a little deeper.
Their conclusions were, as best I can tell, two-fold: 1) there's little guarantee that personal information provided to or collected by online businesses won't end up in the hands of the government, and 2) a lot of the so-called consumer "concern" over privacy is overblown by consumer advocacy groups and politicians.
I think the panel is probably right on the first point. Notwithstanding that Google managed not to give the DOJ the keyword searches it requested (having won its court case), as Professor Seltzer pointed out, they still did hand over a large number of URLs. Not to mention that in a separate case they were ordered to provide the FTC with the entire contents of a Gmail account, and other search engines provided the DOJ with the requested keyword searches. "If we don't have a strong protection law, all we have is the company's word, and hype and fact don't always match," Seltzer said. "Anything that is collected in a regime of weak privacy laws is something that the government can get access to."
This is probably correct - and helps to explain why companies like Microsoft are getting behind national data protection and privacy legislation. At an event sponsored by the FTC and the BBB that I attended this week, a few panelists noted that there isn't a federal requirement for websites to post a privacy policy. They echoed the Wharton speakers in arguing that this leaves the question of privacy promises between a business and a consumer in the hands of the market. Although there are good business reasons to write a solid privacy policy, it can mean that things are sometimes a little haphazard.
I'm less convinced, however, by the panel's second conclusion. McCullagh argued that it was "the privacy fundamentalists - the pro-regulation privacy groups" that were driving most of the debate about privacy, and that most consumers were perfectly happy giving up personal data if they think they're getting something in return.
Technically, consumer groups are the ones who get the press. But I do think that they sometimes represent true consumer concern. And we can cite Amazon.com all we want - it doesn't mean that consumers aren't concerned, and aren't, as many studies have now shown, deleting cookies, scaling back their website purchases and increasingly downloading anti-spyware technology. Consumers are concerned because they often don't see the value that they are supposed to be getting from giving up their data. Sure, when they do understand the bargain, they're much less concerned. But that's the fundamental problem: they often don't.
So until we make it clear to them what the trade-off is, their concern is going to remain quite real. Let's call a spade a spade here: if consumers don't have a reasonable expectation of privacy while they're online, then somebody (other than former Sun CEO Scott McNealy) should be telling them that. At the very least, shouldn't this be in a privacy statement?