With the shelter-in-place orders imposed by local and state governments, businesses are scrambling to transition to a virtual workforce and to enable employees to work remotely from home. Educational institutions are no exception. School administrators and teachers have been working hard to create and implement plans to educate students at home, including maintaining a classroom curriculum through online platforms and incorporating daily or weekly interactions with teachers and classmates through video chat or remote conferencing services.
With over one billion monthly active users, chances are that you have heard of the wildly popular TikTok platform, now owned and run by the Chinese company ByteDance. By allowing its users to live-stream anything from lip-sync battles set to their favorite pop artists' songs to controversial video content of government protests or operations, TikTok has understandably caught the attention, and the ire, of governments (and parents) throughout the world.
We’ve previously written about doxing and how it can be used by both vigilante social activists and malicious cyber bullies. Recently, in a first-of-its-kind ruling, the U.S. District Court for the District of Columbia concluded that white supremacists using social media to target and harass American University’s first female African-American student body president were liable to her for over $725,000 in damages.
Efforts to regulate cross-device tracking have increased since we last addressed the topic in 2017, following the release of the FTC's Staff Report. Significant developments include the implementation and enforcement of the EU's General Data Protection Regulation (GDPR) and the fast-approaching implementation deadline for the California Consumer Privacy Act (CCPA). These laws, while not targeting cross-device tracking specifically, seek to limit the way in which consumer data is tracked and sold.
Much like humans, bots come in all shapes and sizes. In social media networks, these bots can like what you post and even increase your followers. Companies use bots for all types of things—from booking a ride to giving makeup tutorials. Some bots can even solve your legal problems. Besides saving time and money, bots have the potential to reduce errors and increase a business’s customer base. But what happens when bots spy on users and share personal information? Or when they make racial slurs and offensive comments?
The UK's data protection authority, the Information Commissioner's Office (ICO), has published an update report on privacy issues surrounding real-time bidding (RTB) and programmatic advertising. The report is a progress update on the ICO's investigation into the AdTech industry, which it says is one of its regulatory priorities.
For any company that has tackled GDPR compliance, the new privacy rights introduced by the California Consumer Privacy Act of 2018 (CCPA) will seem pretty familiar. It might even be tempting to assume that by being GDPR compliant, one is already most of the way there in terms of preparing for the CCPA. In “Countdown to CCPA #2: GDPR Compliance Does Not Equal CCPA Compliance,” colleagues Catherine D. Meyer, Steven Farmer, Fusae Nara and Rafi Azim-Khan explain how, similarities aside, there are significant differences between the two privacy laws.
Protecting consumer data privacy in the age of artificial intelligence and increased digital commerce is a growing concern. Enacted in June 2018, the California Consumer Privacy Act (CCPA) introduced provisions to protect consumers and became the first U.S. law that can be viewed as a response to the GDPR. Going into effect on January 1, 2020, legislation of this scope has far-reaching tendrils that may breed unintended consequences.
No one knows your face as well as your iPhone does. All the unique variances that make your face yours and yours alone are data points your iPhone uses to unlock itself, substituting a face for a thumbprint. This same data that the iPhone collects can be used by the underlying technology—facial recognition—in a vast array of applications, from border control to photo tagging to law enforcement. But is this data (the measurement of the space between the eyes, the texture of the skin, etc.) open data? Or do individuals have a right to protection of an image of their face?