On February 8, 2023, the U.S. Department of the Treasury released a report citing its “findings on the current state of cloud adoption in the sector, including potential benefits and challenges associated with increased adoption.” Treasury acknowledged that cloud adoption is an “important component” of a financial institution’s overall technology and business strategy, but also warned the industry about the harm a technical breakdown or cyberattack could inflict on the public, given financial institutions’ reliance on a small number of large cloud service providers. Treasury also noted that “[t]his report does not impose any new requirements or standards applicable to regulated financial institutions and is not intended to endorse or discourage the use of any specific provider or cloud services more generally.”
Modern cloud computing came into existence only about 20 years ago, but virtually all enterprises (99%) now use cloud services. Cloud adoption accelerated further over the last two years during the COVID-19 pandemic, driven by the rise of remote work, the evolution of online business strategies (e.g., e-commerce), and a heightened focus on business resilience. In addition, given budget uncertainties, moving technology tools, data, and storage to the cloud usually yields significant cost savings, and cost optimization has ranked as the top priority for organizations using cloud services for six years in a row.
The last decade saw explosive growth in enterprise migration to the cloud, a trend driven by the promise of lower overhead costs and greater scalability. Accordingly, many enterprises have made the leap, moving both non-mission-critical workloads and mission-critical functionality into the cloud.
This is where “data gravity,” a phrase coined by Dave McCrory, comes into play. Data gravity is the “effect that attracts large sets of data or highly active applications/services to other large sets of data or highly active applications/services, the same way gravity attracts planets or stars.” In the simplest terms, data gravity is the idea that a growing volume of data functions like an anchor: the larger the dataset becomes, the more difficult it is to move.