By Matt Tank
Not too long ago, while driving home, I was pulled over by the police. They asked to see my driver’s licence. Coincidentally, it had expired… Except that it wasn’t a coincidence. The officer explained that my licence plate had been automatically read, and the system had flagged that the car’s owner (me) did not have a valid driver’s licence. My wallet $500 lighter, I was told to make my way home by a particular route, avoiding the overhead cameras that, using the same technology, would have issued me another fine.
Thinking about the end-to-end process, it is clear that the police department, and any other entity the data is shared with, would know:
- Who I am
- My licence and registration details
- My contact details (gotta send the fine somewhere…)
- My daily movements (what location and at what time) – based on a number of data points
- My driving and criminal history
None of this is particularly surprising, and most of this data is relevant to the purpose for which it is collected. The thing is, what happens if someone accesses this data for different purposes?
Notwithstanding the “there’s no problem if you’ve got nothing to hide” school of thought, this information could be dangerous. Most of us know people who we would not want to give our contact information to; thieves would be interested in knowing who is out of the house at what times; driver’s licence information could be used in identity theft. Not to mention the fact that this is just a tiny percentage of the information that is collected about us by governments and corporations. That “free” wi-fi at the shopping centre? You pay for that with information about your movements through the shopping centre (and usually your email address and other contact information – to be shared with their tenants). Loyalty cards = Shopping habits. Facebook Likes = targeted advertising. Online Tax returns = financial information (bank accounts, super, employment).
In many ways, the huge amount of data that we can collect will benefit us greatly, even if that’s not immediately obvious. Think about targeted advertising. The better we can be targeted (using data), the easier it is to inform us about products we’re interested in or need. It reduces the effort we need to expend to find the right products, by predicting what we are likely to buy. Our health records could be used to inform diagnosis, using machine learning and similar technologies.
On the flip side, the same data can also be easily misused. Targeted advertising can be used to exploit us (and in many cases already is) – by delivering advertising that influences us to buy things that we don’t need, or to believe harmful ideas (terrorist ideologies, conspiracies). Our health records can be used to discriminate against us in a number of ways, by causing embarrassment or impacting insurance premiums, employability, etc.
We have already seen firsthand that if there is an opportunity for data to be misused, it will be. Political influence campaigns are becoming more pervasive, and data aggregators such as Facebook have allowed them to be targeted more effectively. Organised criminals routinely steal our bank account data for financial gain. Data breaches can cripple companies, influence elections, and put people in real danger by releasing their contact information.
So do we bite the bullet then? Should we accept, as a number of prominent technologists do, that privacy will soon be a concept of the past, and society will need to adjust? For me, the only way that it could even remotely work is if the data was truly open, and that’s just not the way it works at the moment. The result is that those who have access to the data hold a great deal of power over those who don’t have access. This is why we need to do whatever we can to protect the idea of privacy, even while more and more information about us becomes available.
If we are to protect privacy, where do we start? It’s got to be a multi-faceted approach – there is no one solution. Government must play a part, as well as experts, but we also must understand that we have a responsibility to protect our own information and privacy.
For organisations, data is a valuable asset. To put it in perspective, as of today, Tesla’s market capitalisation is $US59 billion. This makes the company more valuable than BMW, despite having only a tiny fraction of the annual sales (22,000 units in 2017, compared to BMW’s 2 million). Why?
A large part of the answer is data. The value of the data collected from Tesla’s Autopilot system more than makes up for the sales discrepancy. It’s also the reason that tech companies hold spots 1-7 in the list of the world’s most valuable public corporations.
The value of data is in its use. It is worthless if it is just sitting on a computer somewhere. This provides companies an enormous incentive to misuse or sell data, and means that companies that self-regulate are putting themselves at a disadvantage compared to the more unscrupulous ones.
If companies cannot be relied upon to self-regulate (and consumers cannot be trusted to read terms of service and make decisions appropriately) then it falls on governments to regulate them. Regulation needs to provide tangible and comprehensive protection, while still allowing for the benefits of data collection to be realised. To this end, data regulation should include the following protections:
1. Individuals’ data should always belong to the individual, unless those rights are explicitly waived.
If I submit a video to YouTube, it should always remain my property – this should be obvious. Less obvious is that if I click on an ad in Facebook, the data generated by that click, which will be used for personalising future advertising, should also belong to me. I should be able to request a copy of all my data stored by the company, and I should be able to request its deletion or depersonalisation. Where depersonalisation of data is used instead of deletion, the process should be clearly described, and it should be clearly demonstrable that retained data cannot be used to identify the individual.
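As a sketch of what demonstrable depersonalisation could look like (the field names and the keyed-hash approach here are my own assumptions, not a prescribed standard), direct identifiers can be replaced with keyed one-way pseudonyms, so retained records stay linkable for analytics while the original identity cannot be recovered without the secret key:

```python
import hashlib
import hmac

# Hypothetical organisation-held secret; destroying it makes the
# pseudonyms permanently unlinkable to the original identifiers.
PEPPER = b"replace-with-a-randomly-generated-secret"

def depersonalise(record, identifier_fields=("name", "email", "phone")):
    """Replace direct identifiers with keyed one-way pseudonyms.

    The same input always maps to the same pseudonym, so records remain
    linkable to each other, but the original value cannot be recovered
    without the secret key.
    """
    redacted = dict(record)
    for field in identifier_fields:
        if field in redacted:
            digest = hmac.new(PEPPER, redacted[field].encode(), hashlib.sha256)
            redacted[field] = digest.hexdigest()[:16]
    return redacted
```

Note that pseudonymisation alone is not always enough: combinations of remaining fields (postcode, birth date, and so on) can still re-identify people, which is exactly why the process needs to be demonstrably safe rather than merely asserted.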
2. If data is to be shared with third parties, the scope and purpose of that sharing should be clearly communicated to me. Data sharing agreements are not transferable.
If I have health information stored online, that shouldn’t imply that my doctor’s office, along with every other health care provider in the state, has automatic access to it. I should have to provide consent for my data to be shared with my doctor, and there should be a mechanism for seeing whom my data is shared with and for revoking consent at any time. Of course, for some businesses data sharing is a key part of the model, so they would need to be given the power to modify or suspend my service if I revoke consent, but at least it would all be transparent. It is also important that consent is not transferable: if I agree for my data to be shared with a third party, that third party should not be allowed to share it further unless separate consent is provided. The primary service provider’s data sharing agreements would need to make this clear and enforceable, particularly where the data sharing occurs across jurisdictions, outside the scope of the regulations.
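To make the idea concrete, here is a minimal sketch of a non-transferable consent registry (the function names and the `(user, recipient)` keying are assumptions for illustration): a party holding my data cannot pass it on unless I have separately consented to the new recipient, and I can list and revoke every grant.

```python
# Hypothetical registry: consent is keyed by (user, recipient), so it
# cannot be transferred - each recipient needs its own explicit grant.
consents = set()

def grant_consent(user_id, recipient):
    consents.add((user_id, recipient))

def revoke_consent(user_id, recipient):
    consents.discard((user_id, recipient))

def may_share(user_id, recipient):
    """Sharing with a recipient requires that recipient's own grant,
    regardless of who currently holds the data."""
    return (user_id, recipient) in consents

def shared_with(user_id):
    """Lets the user see everyone who currently holds their consent."""
    return sorted(r for (u, r) in consents if u == user_id)
```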
3. Personal data should be appropriately protected.
All personal data should be treated as sensitive and confidential. There should be a minimum level of protection for this data, and in the event of a data breach, companies should be held to account for any failure to adequately protect or secure personal data.
4. Personal data should only be retained where required.
If you are required to provide data to identify yourself when you set up an account, for example, that data should only be retained for as long as it is required to complete the identification process, unless it is also reasonably used for future verification. In the event of a data breach, or when assessing compliance, the data types held by the companies should be audited and if a reasonable explanation for its storage cannot be provided, the company should be held to account.
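A purpose-based retention rule could be sketched like this (the purposes and periods below are placeholders, not a recommendation): every item is stored with the purpose it was collected for, and a purge routine deletes anything whose retention period has lapsed, so whatever remains can be explained in an audit.

```python
from datetime import datetime, timedelta

# Hypothetical retention periods, derived from the purpose of collection.
RETENTION = {
    "identity_check": timedelta(days=30),
    "billing": timedelta(days=365 * 7),  # e.g. a statutory record-keeping period
}

records = []

def store(data, purpose, collected_at):
    """Tag every item with an expiry derived from its stated purpose."""
    records.append({
        "data": data,
        "purpose": purpose,
        "expires": collected_at + RETENTION[purpose],
    })

def purge(now):
    """Delete anything whose retention purpose has lapsed."""
    records[:] = [r for r in records if r["expires"] > now]
```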
With so many use cases for personal data, and so many business models, regulation can only go so far. In addition to meeting regulatory requirements, organisations should consider the privacy of consumers when implementing systems and processes, and the experts responsible for designing and building these systems should advocate for making privacy protection a priority, in the same way that system security or keeping backups is a priority. IT professionals like myself have a responsibility to make sure personal data is protected, not just from outsiders, but also from internal staff and from organisational overreach.
If personal data is designed to be used by the system, then we need to ensure it is only fully accessible by the system, and not by the people overseeing it. On the other hand, if people are required to access personal data, they should only have access to the data required to complete their task, their access should be individually logged, and alerting should be used where the access is suspicious (for example, data is accessed for the stated purpose of completing an insurance claim, but no corresponding claim is ever entered into the system). These protections should not be overlooked for the sake of expediency, because the business processes aren’t in place, or because “things might change later”.
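As one illustration of what that logging and alerting could look like (the schema and the “claim never entered” check are assumptions for the sketch, not any real product’s API):

```python
from datetime import datetime, timezone

# Hypothetical audit trail: every access to personal data is logged with
# who accessed it, whose data it was, and the stated purpose.
audit_log = []
open_claims = set()  # claims actually entered into the system

def access_record(staff_id, customer_id, stated_purpose, claim_id=None):
    """Log each access individually before any data is returned."""
    entry = {
        "staff": staff_id,
        "customer": customer_id,
        "purpose": stated_purpose,
        "claim": claim_id,
        "time": datetime.now(timezone.utc).isoformat(),
    }
    audit_log.append(entry)
    return entry

def suspicious_accesses():
    """Flag accesses justified by an insurance claim that was never entered."""
    return [e for e in audit_log
            if e["purpose"] == "insurance_claim"
            and e["claim"] not in open_claims]
```

In a real system the flagged entries would feed an alerting pipeline rather than a function call, but the principle is the same: individual logging plus a check that the stated purpose actually happened.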
What You Can Do
As with most things, no matter how many protections are in place, you can’t stop personal data from being misused entirely. Regulation can only be adopted so widely; there will always be organisations that try to game the system; and in a world where you have control over your data, you also have the power to voluntarily allow it to be misused (usually without fully understanding the consequences). So what can you do to help safeguard your own information?
1. Try to understand the motives of the organisation collecting your data.
Tesla wants to see how you drive, and teach its autonomous vehicles to drive the same way, in order to sell cars. YouTube wants to sell advertising, firstly by keeping you engaged with the platform for as long as possible, but also by using your viewing habits to target you with particular ads. Many mobile game companies design games that are not much more than Skinner Boxes, to keep you engaged, and then encourage you to spend money. Don’t be too cynical, but remember that while it might be in companies’ best interests to provide you with value, they only exist because they provide themselves with value. If the costs outweigh the benefits, be willing to let it go.
2. Familiarise yourself with privacy controls and policies.
Facebook is a good example of many of the good and bad aspects of data privacy. On the plus side, they provide you with a good set of tools to control what data you share with whom. On the negative side, these tools are fairly complicated, and often default to a more insecure, or open, setting. However, if you take the time to learn how these tools work, and are prepared to revisit your settings regularly, you can enjoy the benefits of the platform without sharing too much data.
Although it seems like a chore, make sure you read the terms of service to find out how your data will be used. This is particularly important if the data you provide to the service is sensitive or identifies you personally.
3. Be aware of what you are sharing, and with whom.
Keep in mind you may be sharing more than you intend. This is especially true with images and video, where you can inadvertently capture something unintended in the background.
4. Don’t assume that loss of privacy is inevitable.
If enough of us follow the previous suggestions, we can make it clear to organisations that they must respect consumers’ privacy to run a viable business. We can talk to our federal politicians to make sure they are aware of the issues and suggest they take action. Finally, don’t vote for candidates who are completely out of touch with new and emerging technologies. Government will always lag behind technological progress, but this should be minimised as much as possible.