Much like humans, bots come in all shapes and sizes. In social media networks, these bots can like what you post and even increase your followers. Companies use bots for all types of things—from booking a ride to giving makeup tutorials. Some bots can even solve your legal problems. Besides saving time and money, bots have the potential to reduce errors and increase a business’s customer base. But what happens when bots spy on users and share personal information? Or when they make racial slurs and offensive comments?
The UK Data Protection Authority, the Information Commissioner’s Office (ICO), has published an update report on privacy issues around real-time bidding (RTB) and programmatic advertising. The report is a progress update on the ICO’s investigation into the AdTech industry, which it says is one of its regulatory priorities.
For any company that has tackled GDPR compliance, the new privacy rights introduced by the California Consumer Privacy Act of 2018 (CCPA) will seem pretty familiar. It might even be tempting to assume that by being GDPR compliant, one is already most of the way there in terms of preparing for the CCPA. In “Countdown to CCPA #2: GDPR Compliance Does Not Equal CCPA Compliance,” colleagues Catherine D. Meyer, Steven Farmer, Fusae Nara and Rafi Azim-Khan explain how, similarities aside, there are significant differences between the two privacy laws.
Protecting consumer data privacy in the age of artificial intelligence and increased digital commerce is a growing concern. Enacted in June 2018, the California Consumer Privacy Act (CCPA) introduced provisions to protect consumers and became the first U.S. law that can be viewed as a response to the GDPR. Taking effect on January 1, 2020, legislation of this scope has far-reaching tendrils that may breed unintended consequences.
No one knows your face as well as your iPhone does. All the unique variations that make your face yours and yours alone are data points your iPhone uses to unlock the device, substituting a face for a thumbprint. This same data that the iPhone collects can be used by the underlying tech—facial recognition technology—in a vast array of applications, from border control to photo tagging to law enforcement. But is this data (the measurement of the space between the eyes, the texture of the skin, etc.) open data? Or do individuals have a right to protection of an image of their face?
Do you like getting your news online, sharing videos or tweeting memes? A little piece of legislation known as the European Union Directive on Copyright in the Digital Single Market may signal the end of some of the internet’s simple pleasures. On September 13, the European Parliament approved new legislation that would overhaul the region’s approach to copyright law. As with the EU’s privacy regulations, the legislation could have an impact far beyond Europe, redrawing the lines of liability that exist between posters, publishers and platforms. Not surprisingly, technology companies and platforms such as Google, Amazon, and Wikipedia strongly opposed the legislative changes.
The European Parliament adopted a resolution earlier this month to suspend the EU-U.S. Privacy Shield agreement. The Privacy Shield is a protocol that provides for the exchange of personal data between the EU and the United States for commercial purposes. Adopted in 2016 after the European Court of Justice invalidated the Safe Harbor arrangement, the shield is intended to safeguard the “fundamental privacy rights” of European citizens with respect to data transfers between signatory countries.
Every day, millions of people are being unwittingly recorded by others. Every person you see walking down the street likely has the means to record your image and transmit it to billions of people on a whim. But would you have ever expected that your Lyft or Uber ride was being broadcast across the globe for others’ entertainment? For some passengers in St. Louis, this was their reality.
If you haven’t seen Sundar Pichai’s presentation on Google Duplex, watch it. The technology is fascinating.
Google is developing software that can assist users in completing specific tasks, such as making reservations by telephone. The software uses anonymized phone conversations as the basis for its neural network and, in conjunction with automated speech recognition and text-to-speech software, can have independent phone conversations with other people. Incredibly, the software requires no human interaction—at least by the user requesting the service—to complete its task. The result is that you can task the software to set up a haircut appointment for you, or book a table at a restaurant where it is difficult to get reservations, with no further input needed. It can also work through different scheduling options if your preferred time is not available. And importantly, the conversations seem natural—it is very difficult to tell that one of the participants in the conversation is a computer.
If there’s a golden rule for the online age we live in, it’s “Always assume anything you post online will be visible to all.” Just like the original Golden Rule, it’s a maxim ignored often enough to bear repeating and frequent illustration. With that in mind, let’s check in on recent developments regarding social media revealing details its users would rather conceal—bankruptcy edition.