Hitachi Data Systems: 'right to be forgotten' is a pipe dream
Policy makers do not understand the technical issues, says HDS's chief technology officer for EMEA
By Sophie Curtis | Techworld | Published: 10:27, 30 July 2012
The European Union's proposal to give internet users the “right to be forgotten” is unfeasible, according to Bob Plumridge, chief technology officer (CTO) for EMEA at Hitachi Data Systems.
At the start of this year, the European Commission proposed a new law that would allow people to demand that organisations holding their data delete it, unless there are “legitimate” grounds to retain it.
European Union Justice Commissioner Viviane Reding said that the new rules would help build trust in online services because people would be better informed about their rights and more in control of their information.
While the intentions behind this proposed legislation are good, Plumridge said there is a gap between what policy makers would like to do and what technology is actually capable of doing.
“Say someone comes to me and says, I want you to erase all the information you hold on me. I could do that off the online systems pretty easily. But various studies show that, on average, most corporates have up to nine copies of any unique data,” he says.
“Some of that data could be locked in a vault in a mountain somewhere because that’s the ultimate DR protection. So do I have to go and delete those individual records every time somebody says please delete everything you hold on me? Doesn’t sound very practical to me.”
Plumridge believes that the people who are coming up with ideas to improve privacy do not understand the limits of today's technologies. He said that, for the right to be forgotten to become a reality, all of these individual records would need to be linked together.
“You would have your master copy, and every time a copy of that is made, there would be a link between the original and the copy. So if the original is altered, this modification is cascaded through all the copies,” he says.
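In rough terms, the linkage model he describes could be sketched as follows. This is a minimal illustration in Python only, assuming a simple in-memory structure; the class and method names are invented for the example and do not reflect any actual Hitachi system.

class LinkedRecord:
    """A master record that keeps a link to every copy made from it."""

    def __init__(self, data):
        self.data = data
        self.copies = []              # links from this record to its copies

    def make_copy(self):
        copy = LinkedRecord(self.data)
        self.copies.append(copy)      # register the link at copy time
        return copy

    def update(self, new_data):
        self.data = new_data
        for copy in self.copies:      # cascade the change through all copies
            copy.update(new_data)

    def erase(self):
        self.data = None              # "forget" the data subject
        for copy in self.copies:
            copy.erase()

master = LinkedRecord({"name": "Jane Doe", "email": "jane@example.com"})
backup = master.make_copy()
master.erase()                        # cascades: backup.data is now None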
However, having links between all copies of the data means that if one copy is corrupted, all the others are at risk of being affected. Part of having a good disaster recovery policy is ensuring that back-up copies are not affected by any event that befalls the master copy.
Furthermore, some businesses are required to hold onto data for compliance reasons. For example, the Financial Services Authority (FSA) requires banks to keep details of all banking transactions for a number of years, so they would not be able to comply with a deletion request, even if that customer had left the bank.
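A deletion-request handler in such an environment would have to check retention obligations before erasing anything. The sketch below shows the general shape of that check; the retention period, category and field names are placeholders for illustration, not the FSA's actual rules.

from datetime import date, timedelta

# Illustrative retention rule; the real categories and periods would
# come from the applicable regulation, not from this example.
RETENTION = {"banking_transaction": timedelta(days=6 * 365)}

def can_delete(record):
    """True only if no retention rule still applies to the record."""
    hold = RETENTION.get(record["category"])
    if hold is None:
        return True                   # no compliance hold on this category
    return date.today() > record["created"] + hold

request = {"category": "banking_transaction", "created": date(2010, 5, 1)}
if can_delete(request):
    print("Deleting record")
else:
    print("Retained for compliance; deletion request cannot be honoured")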
“I think what’s happening is these things are coming out of the policy-making side,” he says. “People are starting to look at them and say is that really practical? Good idea but maybe we can't do it now. Maybe we’ll be able to do it in two years' time.”
Data and the real-time economy
The question of how personal data should be used is becoming increasingly important in the real-time economy. Companies with access to your data are not only able to see who you are and how to contact you, but where you are right now and what you are doing – information that is potentially very sensitive.
This information is extremely valuable to companies, because it gives them deep insight into who their customers are, allowing them to tightly focus their marketing efforts and respond quickly to customer demand. However, collecting data is not an end in itself – the key is knowing how to make use of that data while still protecting customers' privacy.
Some companies are now employing specialist data scientists to analyse their data and compile reports that will inform future corporate policy. This often involves depersonalising data, so that it cannot be connected with a particular individual.
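One common way to do this, sketched below, is to strip direct identifiers and replace the customer key with a salted hash, so records can still be grouped and counted without being tied back to a person. The field names and salt are invented for illustration; this is not a description of any particular company's process.

import hashlib

SALT = b"rotate-this-secret"          # illustrative; would be managed securely
DIRECT_IDENTIFIERS = {"name", "email", "phone", "address"}

def depersonalise(record):
    """Drop direct identifiers and pseudonymise the customer key."""
    cleaned = {k: v for k, v in record.items()
               if k not in DIRECT_IDENTIFIERS}
    token = hashlib.sha256(SALT + record["customer_id"].encode())
    cleaned["customer_id"] = token.hexdigest()
    return cleaned

print(depersonalise({"customer_id": "C123", "name": "Jane Doe",
                     "email": "jane@example.com", "purchase": 42.50}))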
Plumridge says that, while this is an interesting concept, it will ultimately become impractical as data volumes grow. Hitachi has therefore been sinking large amounts of money into R&D around analytical engines, data protection and data linkage technologies that can automate data processing.
“I think it's inevitable that the Big Data revolution will rely on machine-to-machine communication,” he says. “I just cannot see how an individual would be able to trawl through what is potentially megabytes, or even terabytes, of data. Even if they could, the result they come up with would be so long after the event that it would be almost worthless.”
He gives the example of the Japanese bullet trains, which are built by Hitachi. In the past, employees had to walk along the track at night and identify where maintenance work needed to be done. However, about nine months ago, the company started mounting digital CCTV cameras externally on the trains to record the condition of the tracks as they go along.
When a train arrives at its destination it offloads the data to a server running an analytical engine, which goes through the data and identifies areas of the track that require maintenance work. That information is then packaged up and sent directly to the various maintenance depots along the length of the track.
“Within an hour of a train pulling into the station, the analytics stuff has been done and the maintenance work is prepared. This also means that the maintenance teams know exactly what has been done and why it was done, and that forms part of their archive of the rail system,” he explains.
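The overall shape of that pipeline (offload the recorded footage, analyse it, and route findings to the depot responsible for each stretch of track) might be sketched like this. The detection logic and depot mapping below are stand-ins invented for the example, not Hitachi's actual system.

# Depot responsible for each stretch of track, keyed by kilometre range.
DEPOTS = {range(0, 100): "Depot A", range(100, 200): "Depot B"}

def analyse(frames):
    """Stand-in analytical engine: flag frames scored as defective."""
    return [f["km"] for f in frames if f["defect_score"] > 0.8]

def route_to_depots(defect_kms):
    """Package flagged locations into work orders per depot."""
    work_orders = {}
    for km in defect_kms:
        for section, depot in DEPOTS.items():
            if int(km) in section:
                work_orders.setdefault(depot, []).append(km)
    return work_orders

frames = [{"km": 42.3, "defect_score": 0.9},    # footage offloaded on arrival
          {"km": 150.1, "defect_score": 0.95},
          {"km": 60.0, "defect_score": 0.1}]
print(route_to_depots(analyse(frames)))
# {'Depot A': [42.3], 'Depot B': [150.1]}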