The 2020 election will be unprecedented in many respects. More people will be voting by mail, and there will likely be more democratic participation online than ever before. Internet platforms and communication services will become more influential forums as people are restricted from in-person conventions and debates. But even before the pandemic pushed these operations online, some lawmakers were already seeking to monitor misinformation in political discourse on the internet, specifically in the context of deepfakes. Deepfakes are manipulated photo, video, or audio clips generated by computers, often with the assistance of artificial intelligence algorithms. They can make someone appear to say or do something that never actually happened, or create realistic images of people who do not exist.
Have you ever been startled by the buzzing sound of a passing swarm of angry mechanical bees as you work from home? Have you ever looked out your window and noticed an agile device zipping through your property? If so, drones might be adversely affecting your lifestyle. These little devices once lived only in the imaginations of science fiction writers, but nowadays, they are popular gadgets that many parents routinely buy for their kids at Christmas. The popularity of drones has exploded due to cheaper production costs, advancements in camera and wireless technologies, and the appeal of high-quality bird's-eye-view footage popularized by aspiring vloggers looking to create impressive visual content. Recently, the COVID-19 pandemic has further fueled drone popularity due to their potential in the context of robotic delivery services. However, despite its advantages, drone technology poses a significant threat to property and privacy rights; luckily, the law offers several grounds to obtain legal remedies if such rights are infringed.
As the world continues to deal with the unprecedented challenges caused by the COVID-19 pandemic, Artificial Intelligence (AI) systems have emerged as a potentially formidable tool in detecting and predicting outbreaks. In fact, by some measures the technology has proven to be a step ahead of humans in tracking the spread of COVID-19 infections. In December 2019, it was a website leveraging AI technology that provided one of the key early warnings of an unknown form of pneumonia spreading in Wuhan, China. Soon after, information sharing among medical professionals followed as experts tried to understand the extent of the unfolding public health crisis. While humans eventually acted on these warnings, the early detection enabled through the use of AI-supported data aggregation demonstrates both the promise and the potential concerns associated with these systems.
I am not at all embarrassed to admit that I love working in my pajamas. A lot of us are working from home now to help flatten the curve, and while social distancing has created a lot of challenges for most people, one of the perks is the ability to socially distance yourself from your hairbrush and roll into the office in your sweatpants. I don't think I'm alone on this. According to one survey, 60% of office professionals report a better work-life balance when working from home, and 74% of workers would like to telecommute more often after social distancing restrictions are lifted. Not coincidentally, the wide-scale adoption of telecommuting has resulted in a corresponding uptick in employee monitoring tools.
'Contact tracing' is a process used by public health officials to identify individuals who may have been exposed to a contagious virus, such as COVID-19, through close contact with an infected person. Traditionally, infected persons are asked to identify the people they interacted with while infectious or in the days leading up to their diagnosis. Health practitioners can then contact those at risk to warn them of potential exposure, what steps to take, and how to avoid infecting others.
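The manual process described above, identifying recent interactions and then notifying anyone seen during the infectious window, can be sketched as a simple lookup. This is only an illustration of the concept, not an actual public health tool; the names, the assumed 14-day lookback window, and the `contacts_to_notify` function are all hypothetical.

```python
from datetime import date, timedelta

# Assumed lookback period before diagnosis, for illustration only.
INFECTIOUS_WINDOW_DAYS = 14

def contacts_to_notify(interactions, diagnosis_date, window_days=INFECTIOUS_WINDOW_DAYS):
    """Return the distinct people seen within `window_days` before diagnosis.

    `interactions` is a list of (person, date) tuples reported by the patient.
    """
    cutoff = diagnosis_date - timedelta(days=window_days)
    return sorted({person for person, seen_on in interactions
                   if cutoff <= seen_on <= diagnosis_date})

# Hypothetical interaction log reported by a patient diagnosed March 30, 2020.
interactions = [
    ("Alice", date(2020, 3, 1)),   # outside the window, not notified
    ("Bob",   date(2020, 3, 20)),  # inside the window
    ("Carol", date(2020, 3, 25)),  # inside the window
    ("Bob",   date(2020, 3, 26)),  # repeat contact, notified only once
]
print(contacts_to_notify(interactions, date(2020, 3, 30)))
# → ['Bob', 'Carol']
```

Digital contact-tracing apps automate the first step of this process by logging proximity events instead of relying on patient recall, which is precisely what raises the privacy questions discussed in this space.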
With the shelter-in-place orders imposed by local and state governments, businesses are scrambling to transition to a virtual workforce and enable employees to work remotely from home. Educational institutions are no exception. School administrators and teachers have been working hard to create and implement plans to educate students at home, including maintaining a classroom curriculum through online platforms and incorporating daily or weekly interactions with the teacher and classmates through video chat or remote conferencing services.
With over one billion monthly active users, chances are that you have heard of the wildly popular TikTok platform, now owned and run by the Chinese company ByteDance. By allowing its users to live-stream anything from lip-sync battles set to their favorite pop artists' songs to controversial video content of government protests or operations, TikTok has understandably caught the attention and ire of governments (and parents) throughout the world.
We’ve previously written about doxing and how it can be used by both vigilante social activists and malicious cyber bullies. Recently, in a first-of-its-kind ruling, the U.S. District Court for the District of Columbia concluded that white supremacists using social media to target and harass American University’s first female African-American student body president were liable to her for over $725,000 in damages.
Efforts to regulate cross-device tracking have increased since we last addressed the topic in 2017, following the release of the FTC's Staff Report. Significant developments include the implementation and enforcement of the EU's General Data Protection Regulation (GDPR) and the fast-approaching implementation deadline for the California Consumer Privacy Act (CCPA). These regulations, while not targeting cross-device tracking specifically, seek to limit the ways in which consumer data is tracked and sold.