Five risks of moving your database to the cloud


Provided by EDB

Moving to the cloud is all the rage. According to an IDC Survey Spotlight, “Experience in Migrating Databases to the Cloud,” 63% of enterprises are actively migrating their databases to the cloud, and another 29% are considering doing so within the next three years. This article discusses some of the risks customers may unwittingly encounter when moving their database to a database as a service (DBaaS) in the cloud, especially when the DBaaS leverages open source database software such as Apache Cassandra, MariaDB, MySQL, Postgres, or Redis. At EDB, we classify these risks into five categories: support, service, technology stagnation, cost, and lock-in.

Moving to the cloud without sufficient diligence and risk mitigation can lead to significant cost overruns and project delays and, more importantly, may mean that enterprises do not get the expected business benefits from cloud migration. Because EDB focuses on the Postgres database, I will draw the specifics from our experience with Postgres services, but the conclusions are equally valid for other open source database services.

Support risk

Customers running software for production applications need support, whether they run in the cloud or on premises. Support for enterprise-level software must cover two aspects: expert advice on how to use the product correctly, especially in challenging circumstances, and quick resolution of bugs and defects that impact production or the move to production. For commercial software, a minimal level of support is bundled with the license. Open source databases don’t come with a license. This opens the door for a cloud database provider to create and operate a database service without investing sufficiently in the open source community to address bugs and provide support.
Customers can evaluate a cloud database provider’s ability to support their cloud migration by checking the open source software release notes and identifying team members who actively participate in the project. For example, for Postgres, the release notes are freely available, and they name every individual who has contributed new features or bug fixes. Other open source communities follow similar practices. Open source cloud database providers that are not actively involved in the development and bug fixing process cannot provide both aspects of support—advice and rapid response to problems—which presents a significant risk to cloud migration.
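For example, since every PostgreSQL commit is public, a rough first-pass check of a vendor’s participation can be scripted against the community git history (a crude proxy; the release notes remain the authoritative record of who contributed what). A minimal sketch, assuming Python 3, a local clone of the community repository, and commit author counts as the measure:

```python
# Rough sketch: count recent commit authors in a local clone of the PostgreSQL
# repository as a crude proxy for community participation. Assumes `git` is
# installed and that `repo_path` points at an existing clone
# (e.g. git clone https://github.com/postgres/postgres.git).
import subprocess
from collections import Counter

def recent_authors(repo_path: str, since: str = "1 year ago") -> Counter:
    out = subprocess.run(
        ["git", "-C", repo_path, "log", f"--since={since}", "--format=%ae"],
        capture_output=True, text=True, check=True,
    ).stdout
    return Counter(line.strip().lower() for line in out.splitlines() if line.strip())

if __name__ == "__main__":
    counts = recent_authors("postgres")   # path to the local clone (assumption)
    for email, n in counts.most_common(20):
        print(f"{n:5d}  {email}")
```

Because PostgreSQL routes most patches through a small group of committers, a vendor’s engineers may appear in the release notes and commit messages rather than as commit authors, so treat a script like this only as a starting point for the due diligence described above.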
Service risk

Databases are complex software products. Many users need expert advice and hands-on assistance to configure databases correctly to achieve optimal performance and high availability, especially when moving from familiar on-premises deployments to the cloud. Cloud database providers that do not offer consultative and expert professional services to facilitate this move introduce risk into the process. Such providers ask the customer to assume the responsibilities of a general contractor and to coordinate between the DBaaS provider and potential professional services providers. Instead of a single entity they can consult to help them achieve a seamless deployment with the required performance and availability levels, customers get caught in the middle, having to coordinate and mitigate issues between vendors. Customers can reduce this risk by making sure they clearly understand who is responsible for the overall success of their deployment, and that this entity is indeed in a position to execute the entire project successfully.

Technology stagnation risk

The shared responsibility model is a key component of a DBaaS. While the user handles schema definition and query tuning, the cloud database provider applies minor version updates and major version upgrades. Not all providers are committed to upgrading in a timely manner, and some can lag significantly. At the time of this writing, one of the major Postgres DBaaS providers lags the open source community by almost three years in its deployment of Postgres versions. While DBaaS providers can selectively backport security fixes, delayed application of new releases can leave customers missing out on new database capabilities, sometimes for years. Customers need to inspect a provider’s historical track record of applying upgrades to assess this exposure.

A similar risk arises when a proprietary cloud database provider creates its own fork or version of well-known open source software, sometimes to optimize the software for the cloud environment or to address license restrictions. Forked versions can deviate significantly from the better-known parent or fall behind the open source version. Well-known examples of such forks or proprietary versions are Aurora Postgres (a Postgres derivative), Amazon DocumentDB (with MongoDB compatibility), and Amazon OpenSearch Service (originally derived from Elasticsearch). Users need to be careful when adopting cloud-specific versions or forks of open source software: capabilities can deviate over time, and the cloud database provider may or may not adopt new capabilities from the open source version.
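One quick way to quantify that exposure for a Postgres service is to compare the major version the service is actually running against the newest community release. A minimal sketch, assuming the psycopg2 driver and a connection string supplied in a PGDSN environment variable (both illustrative assumptions):

```python
# Minimal sketch: compare the Postgres major version a DBaaS instance is
# running against the newest major version tracked by hand from the community
# release notes. Assumes psycopg2 is installed and PGDSN holds a valid DSN.
import os
import psycopg2

LATEST_COMMUNITY_MAJOR = 14  # update by hand from postgresql.org release notes

def running_major_version(dsn: str) -> int:
    with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
        cur.execute("SHOW server_version;")
        version = cur.fetchone()[0]          # e.g. '11.14' or '14.1'
    return int(version.split(".")[0])

if __name__ == "__main__":
    major = running_major_version(os.environ["PGDSN"])
    lag = LATEST_COMMUNITY_MAJOR - major
    print(f"Service runs Postgres {major}; community is at {LATEST_COMMUNITY_MAJOR} "
          f"({lag} major release(s) behind)")
```

Run periodically, the same one-line query also documents how quickly a provider rolls out new major versions.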
Cost risk

Leading cloud database services have not experienced meaningful direct price increases. However, there is a growing understanding that the nature of cloud services can drive significant cost risk, especially when self-service and rapid elasticity are combined with an opaque cost model. In on-premises environments, database administrators (DBAs) and developers must optimize code to achieve performance with the available hardware. In the cloud, it can be much more expedient to ask the cloud provider to increase provisioned input/output operations per second (IOPS), compute, or memory to optimize performance. Because each such increase drives up cost, these short-term fixes are likely to have long-lasting negative cost impacts.

Users can mitigate the cost risk in two ways: (1) close supervision of increases in IOPS, CPU, and memory, to make sure they are balanced against the cost of application optimization; and (2) scrutiny of DBaaS providers’ cost models, to identify and avoid vendors with complex and unpredictable pricing.

Lock-in risk

Cloud database services can create a “Hotel California” effect, where data cannot easily leave the cloud again, in several ways. While data egress cost is often mentioned, general data gravity and integration with other cloud-specific tools for data management and analysis are more impactful. Data gravity is a complex concept that, at a high level, posits that once a business data set is available on a cloud platform, more applications are likely to be deployed using the data on that platform, which in turn makes it less likely that the data can be moved elsewhere without significant business impact. Cloud-specific tools are also a meaningful driver of lock-in. All cloud platforms provide convenient, proprietary data management and analysis tools. While they help derive business value quickly, they also create lock-in. Users can mitigate the lock-in effect by carefully avoiding the use of proprietary cloud tools and by making sure they use only DBaaS solutions that support efficient data replication to other clouds.

Planning for risk

Moving databases to the cloud is undoubtedly a target for many organizations, but doing so is not risk-free. Businesses need to fully investigate and understand the potential weaknesses of cloud database providers in the areas of support, service, technology stagnation, cost, and lock-in. While these risks are not a reason to shy away from the cloud, it is important to address them up front, and to understand and mitigate them as part of a carefully considered cloud migration strategy.

This content was produced by EDB. It was not written by MIT Technology Review’s editorial staff.

A new era for data: What’s possible with as-a-service


In association with Dell Technologies

For organizations in today’s complex business environment, data is like water—essential for survival. They need to process, analyze, and act on data to drive business growth—to predict future trends, identify new business opportunities, and respond to market changes faster. Not enough data? Businesses die of thirst. Dirty data? Projects are polluted by “garbage in/garbage out.” Too much data for the organization’s analytical capabilities? Businesses can drown in the data flood in their struggle to tap its potential.

But the right amount of data, clean and properly channeled, can quench a business’s thirst for insights, power its growth, and carry it to success, says Matt Baker, senior vice president of corporate strategy at Dell Technologies. Like water, data is not good or bad. The question is whether it’s useful for the purpose at hand. “What’s difficult is getting the data to align properly, in an inclusive way, in a common format,” Baker says. “It has to be purified and organized in some way to make it usable, secure, and reliable in creating good outcomes.”

Many organizations are overwhelmed by data, according to a recently commissioned study of more than 4,000 decision-makers conducted on Dell Technologies’ behalf by Forrester Consulting. During the past three years, 66% have seen an increase in the amount of data they generate—sometimes doubling or even tripling—and 75% say demand for data within their organizations has also increased.
The research company IDC estimates that the world generated 64.2 zettabytes of data in 2020, and that number is growing at 23% per year. A zettabyte is a trillion gigabytes—to put that in perspective, that’s enough storage for 60 billion video games or 7.5 trillion MP3 songs. The Forrester study showed that 70% of business leaders are accumulating data faster than they can effectively analyze and use it. Although executives have enormous amounts of data, they don’t have the means to extract insights or value from it—what Baker calls the “Ancient Mariner” paradox, after the famous line from Samuel Taylor Coleridge’s epic poem, “Water, water everywhere and not a drop to drink.”
Data streams turn to data floods

It’s easy to see why the amount and complexity of data are growing so fast. Every app, gadget, and digital transaction generates a data stream, and those streams flow together to generate even more data streams. Baker offers a potential future scenario in brick-and-mortar retailing. A loyalty app on a customer’s phone tracks her visit to an electronics store. The app uses the camera or a Bluetooth proximity sensor to understand where it is and taps the information the retailer already has about the customer’s demographics and past purchasing behavior to predict what she might buy. As she passes a particular aisle, the app generates a special offer on ink cartridges for the customer’s printer or an upgraded controller for her game box. It notes which offers result in sales, remembers for the next time, and adds the whole interaction to the retailer’s ever-growing pile of sales and promotion data, which then may entice other shoppers with smart targeting.

Adding to the complexity is an often-unwieldy mass of legacy data. Most organizations don’t have the luxury of building data systems from scratch. They may have years’ worth of accumulated data that must be cleaned to be “potable,” Baker says. Even something as simple as a customer’s birth date could be stored in half a dozen different and incompatible formats. Multiply that “contamination” by hundreds of data fields and achieving clean, useful data suddenly seems impossible. But abandoning old data means abandoning potentially invaluable insights, Baker says. For example, historical data on warehouse stocking levels and customer ordering patterns could be pivotal for a company trying to create a more efficient supply chain. Advanced extract, transform, load capabilities—designed to tidy up disparate data sources and make them compatible—are essential tools; a short illustrative sketch follows at the end of this article.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.
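To make Baker’s birth-date example concrete, here is a minimal sketch of the kind of normalization an extract, transform, load step performs. It is purely illustrative: the format list and sample values are assumptions, not part of Dell’s tooling or the Forrester study.

```python
# Minimal ETL-style sketch: normalize birth dates stored in several
# incompatible formats into ISO 8601 (YYYY-MM-DD). The format list and the
# sample records below are illustrative assumptions only.
from datetime import datetime
from typing import Optional

KNOWN_FORMATS = ["%Y-%m-%d", "%m/%d/%Y", "%d.%m.%Y", "%b %d, %Y", "%Y%m%d"]

def normalize_birth_date(raw: str) -> Optional[str]:
    """Return an ISO 8601 date string, or None if no known format matches."""
    for fmt in KNOWN_FORMATS:
        try:
            return datetime.strptime(raw.strip(), fmt).date().isoformat()
        except ValueError:
            continue
    return None  # flag for manual review instead of guessing

if __name__ == "__main__":
    dirty = ["1984-07-09", "07/09/1984", "9.7.1984", "Jul 9, 1984", "19840709", "unknown"]
    for raw in dirty:
        print(f"{raw!r:14} -> {normalize_birth_date(raw)}")
```

Every additional legacy format multiplies this kind of cleanup work across hundreds of fields, which is why Baker argues that data must be “purified” before it can reliably drive decisions.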

How a Russian cyberwar in Ukraine could ripple out globally


Russia has sent more than 100,000 soldiers to the nation’s border with Ukraine, threatening a war unlike anything Europe has seen in decades. Though there hasn’t been any shooting yet, cyber operations are already underway.  Last week, hackers defaced dozens of government websites in Ukraine, a technically simple but attention-grabbing act that generated global headlines. More quietly, they also placed destructive malware inside Ukrainian government agencies, an operation first discovered by researchers at Microsoft. It’s not clear yet who is responsible, but Russia is the leading suspect. But while Ukraine continues to feel the brunt of Russia’s attacks, government and cybersecurity experts are worried that these hacking offensives could spill out globally, threatening Europe, the United States, and beyond.  On January 18, the US Cybersecurity and Infrastructure Security Agency (CISA) warned critical infrastructure operators to take “urgent, near-term steps” against cyber threats, citing the recent attacks against Ukraine as a reason to be on alert for possible threats to US assets. The agency also pointed to two cyberattacks from 2017, NotPetya and WannaCry, which both spiraled out of control from their initial targets, spread rapidly around the internet, and impacted the entire world at a cost of billions of dollars. The parallels are clear: NotPetya was a Russian cyberattack targeting Ukraine during a time of high tensions. “Aggressive cyber operations are tools that can be used before bullets and missiles fly,” says John Hultquist, head of intelligence for the cybersecurity firm Mandiant. “For that exact reason, it’s a tool that can be used against the United States and allies as the situation further deteriorates. Especially if the US and its allies take a more aggressive stance against Russia.”
That looks increasingly possible. President Joe Biden said during a press conference January 19 that the US could respond to future Russian cyberattacks against Ukraine with its own cyber capabilities, further raising the specter of conflict spreading.  “My guess is he will move in,” Biden said when asked if he thought Russia’s President Vladimir Putin would invade Ukraine.
Unintentional consequences?

The knock-on effects for the rest of the world might not be limited to intentional reprisals by Russian operatives. Unlike old-fashioned war, cyberwar is not confined by borders and can more easily spiral out of control.

Ukraine has been on the receiving end of aggressive Russian cyber operations for the last decade and has suffered invasion and military intervention from Moscow since 2014. In 2015 and 2016, Russian hackers attacked Ukraine’s power grid and turned out the lights in the capital city of Kyiv—unparalleled acts that haven’t been carried out anywhere else before or since. The 2017 NotPetya cyberattack, once again ordered by Moscow, was directed initially at Ukrainian private companies before it spilled over and destroyed systems around the world.

NotPetya masqueraded as ransomware, but in fact it was a purely destructive and highly viral piece of code. The destructive malware seen in Ukraine last week, now known as WhisperGate, also pretended to be ransomware while aiming to destroy key data that renders machines inoperable. Experts say WhisperGate is “reminiscent” of NotPetya, down to the technical processes that achieve destruction, but that there are notable differences. For one, WhisperGate is less sophisticated and is not designed to spread rapidly in the same way. Russia has denied involvement, and no definitive link points to Moscow.

NotPetya incapacitated shipping ports and left several giant multinational corporations and government agencies unable to function. Almost anyone who did business with Ukraine was affected because the Russians secretly poisoned software used by everyone who pays taxes or does business in the country. The White House said the attack caused more than $10 billion in global damage and deemed it “the most destructive and costly cyberattack in history.”

Since 2017, there has been ongoing debate about whether the international victims were merely unintentional collateral damage or whether the attack targeted companies doing business with Russia’s enemies. What is clear is that it can happen again. Accident or not, Hultquist anticipates that we will see cyber operations from Russia’s military intelligence agency GRU, the organization behind many of the most aggressive hacks of all time, both inside and outside Ukraine. The GRU’s most notorious hacking group, dubbed Sandworm by experts, is responsible for a long list of greatest hits including the 2015 Ukrainian power grid hack, the 2017 NotPetya hacks, interference in US and French elections, and the hack of the Olympics opening ceremony in the wake of a Russian doping controversy that left the country excluded from the games.

Hultquist is also looking out for another group, known to experts as Berserk Bear, which originates from the Russian intelligence agency FSB. In 2020, US officials warned of the threat the group poses to government networks. The German government said the same group had achieved “longstanding compromises” at companies as it targeted the energy, water, and power sectors. “These guys have been going after this critical infrastructure for a long, a long time now, almost a decade,” says Hultquist. “Even though we’ve caught them on many occasions, it’s reasonable to assume that they still have access in certain areas.”

A sophisticated toolbox

There is serious debate about the calculus inside Russia and what kind of aggression Moscow would want to undertake outside of Ukraine.
“I think it’s pretty likely that the Russians will not target our own systems, our own critical infrastructure,” said Dmitri Alperovitch, a longtime expert on Russian cyber activity and founder of the Silverado Policy Accelerator in Washington. “The last thing they’ll want to do is escalate a conflict with the United States in the midst of trying to fight a war with Ukraine.” No one fully understands what goes into Moscow’s math in this fast-moving situation. American leadership now predicts that Russia will invade Ukraine. But Russia has demonstrated repeatedly that, when it comes to cyber, they have a large and varied toolbox. Sometimes they use it for something as relatively simple but effective as a disinformation campaign, intended to destabilize or divide adversaries. They’re also capable of developing and deploying some of the most complex and aggressive cyber operations in the world. In 2014, as Ukraine plunged into another crisis and Russia invaded Crimea, Russian hackers secretly recorded the call of a US diplomat frustrated with European inaction who said “Fuck the EU” to a colleague. They leaked the call online in an attempt to sow chaos in the West’s alliances as a prelude to intensifying information operations by Russia.  Leaks and disinformation have continued to be important tools for Moscow. US and European elections have been plagued repeatedly by cyber-enabled disinformation at Russia’s direction. At a moment of more fragile alliances and complicated political environments in Europe and the United States, Putin can achieve important goals by shaping public conversation and perception as war in Europe looms. “These cyber incidents can be nonviolent, they are reversible, and most of the consequences are in perception,” says Hultquist. “They corrode institutions, they make us look insecure, they make governments look weak. They often don’t rise to the level that would provoke an actual physical, military response. I believe these capabilities are on the table.”

Meta’s new learning algorithm can teach AI to multi-task


If you can recognize a dog by sight, then you can probably recognize a dog when it is described to you in words. Not so for today’s artificial intelligence. Deep neural networks have become very good at identifying objects in photos and conversing in natural language, but not at the same time: there are AI models that excel at one or the other, but not both.  Part of the problem is that these models learn different skills using different techniques. This is a major obstacle for the development of more general-purpose AI, machines that can multi-task and adapt. It also means that advances in deep learning for one skill often do not transfer to others. A team at Meta AI (previously Facebook AI Research) wants to change that. The researchers have developed a single algorithm that can be used to train a neural network to recognize images, text, or speech. The algorithm, called Data2vec, not only unifies the learning process but performs at least as well as existing techniques in all three skills. “We hope it will change the way people think about doing this type of work,” says Michael Auli, a researcher at Meta AI. The research builds on an approach known as self-supervised learning, in which neural networks learn to spot patterns in data sets by themselves, without being guided by labeled examples. This is how large language models like GPT-3 learn from vast bodies of unlabeled text scraped from the internet, and it has driven many of the recent advances in deep learning. Auli and his colleagues at Meta AI had been working on self-supervised learning for speech recognition. But when they looked at what other researchers were doing with self-supervised learning for images and text, they realized that they were all using different techniques to chase the same goals.
Data2vec uses two neural networks, a student and a teacher. First, the teacher network is trained on images, text, or speech in the usual way, learning an internal representation of this data that allows it to predict what it is seeing when shown new examples. When it is shown a photo of a dog, it recognizes it as a dog. The twist is that the student network is then trained to predict the internal representations of the teacher. In other words, it is trained not to guess that it is looking at a photo of a dog when shown a dog, but to guess what the teacher sees when shown that image.
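The idea is simple enough to sketch. The toy example below is a hand-rolled illustration in PyTorch, not Meta’s released Data2vec code: a small “teacher” encoder is first trained on an ordinary supervised task, then frozen, and a “student” is trained to reproduce the teacher’s internal representation of each input rather than the label. The architectures and data are placeholders.

```python
# Toy illustration of the student/teacher idea described above, written in
# PyTorch. This is NOT Meta's Data2vec code; architectures, data, and the
# training recipe are placeholder assumptions chosen to keep the sketch short.
import torch
import torch.nn as nn

def make_encoder(in_dim=32, hid=64, rep=16):
    # A small MLP standing in for a real image/text/speech encoder.
    return nn.Sequential(nn.Linear(in_dim, hid), nn.ReLU(), nn.Linear(hid, rep))

# Fake "labeled" data for the teacher's ordinary supervised training.
x = torch.randn(512, 32)
y = (x.sum(dim=1) > 0).long()                      # a dummy 2-class label

# 1) Train the teacher in the usual way (here: a tiny classification task).
teacher = make_encoder()
teacher_head = nn.Linear(16, 2)
opt = torch.optim.Adam(list(teacher.parameters()) + list(teacher_head.parameters()), lr=1e-3)
ce = nn.CrossEntropyLoss()
for _ in range(200):
    opt.zero_grad()
    loss = ce(teacher_head(teacher(x)), y)
    loss.backward()
    opt.step()

# 2) Freeze the teacher; train the student to predict the teacher's internal
#    representation of each input, not the label itself.
for p in teacher.parameters():
    p.requires_grad_(False)
student = make_encoder()
opt = torch.optim.Adam(student.parameters(), lr=1e-3)
mse = nn.MSELoss()
for _ in range(200):
    opt.zero_grad()
    with torch.no_grad():
        target = teacher(x)                        # what the teacher "sees"
    loss = mse(student(x), target)
    loss.backward()
    opt.step()

print("final representation-matching loss:", loss.item())
```

The released Data2vec models add important details, such as masked inputs, a continuously updated teacher, and targets averaged over several internal layers, but the core move of regressing onto representations instead of raw labels is the one sketched here.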
Because the student does not try to guess the actual image or sentence but, rather, the teacher’s representation of that image or sentence, the algorithm does not need to be tailored to a particular type of input. Data2vec is part of a big trend in AI toward models that can learn to understand the world in more than one way. “It’s a clever idea,” says Ani Kembhavi at the Allen Institute for AI in Seattle, who works on vision and language. “It’s a promising advance when it comes to generalized systems for learning.” An important caveat is that although the same learning algorithm can be used for different skills, it can only learn one skill at a time. Once it has learned to recognize images, it must start from scratch to learn to recognize speech. Giving an AI multiple skills at once is hard, but that’s something the Meta AI team wants to look at next.   The researchers were surprised to find that their approach actually performed better than existing techniques at recognizing images and speech, and performed as well as leading language models on text understanding. Mark Zuckerberg is already dreaming up potential metaverse applications. “This will all eventually get built into AR glasses with an AI assistant,” he posted to Facebook today. “It could help you cook dinner, noticing if you miss an ingredient, prompting you to turn down the heat, or more complex tasks.” For Auli, the main takeaway is that researchers should step out of their silos. “Hey, you don’t need to focus on one thing,” he says. “If you have a good idea, it might actually help across the board.”

All charges against China Initiative defendant Gang Chen have been dismissed


The US Justice Department has filed a motion to dismiss all charges against MIT mechanical engineering professor and nanotechnologist Gang Chen, nearly one year to the day after he was indicted on charges relating to his alleged failure to disclose relationships and funding from Chinese entities. From the start, Chen had maintained his innocence, while MIT had indicated that Chen was working to establish a research collaboration on behalf of the institution and that the funding in question was actually for the university rather than Chen personally. MIT also paid for his defense. (MIT Technology Review is funded by the university but remains editorially independent.)

“Today’s dismissal of the criminal charges against Gang Chen is a result of our continued investigation,” US Attorney for the District of Massachusetts Rachael Rollins said in a statement after the filing. “Through that effort, we recently obtained additional information pertaining to the materiality of Professor Chen’s alleged omissions in the context of the grant review process at issue in this case. After a careful assessment of this new information in the context of all the evidence, our office has concluded that we can no longer meet our burden of proof at trial.”

“The government finally acknowledged what we said all along: Professor Gang Chen is an innocent man,” Robert Fisher, Chen’s defense attorney, said in a statement. “Our defense was never based on any legal technicalities. Gang did not commit any of the offenses he was charged with. Full stop. He was never in a talent program. He was never an overseas scientist for Beijing. He disclosed everything that he was supposed to disclose and never lied to the government or anyone else.”

The China Initiative

Chen was one of the most high-profile scientists charged under the China Initiative, a Justice Department program launched under the Trump administration to counter economic espionage and national security threats from the People’s Republic of China.
Despite its stated purpose, an investigation by MIT Technology Review found that the initiative has increasingly focused on prosecuting academics for research integrity issues—hiding ties or funding from Chinese entities on grant or visa forms—rather than industrial spies stealing trade secrets. Only 19 of 77 cases (25%) identified by MIT Technology Review alleged violations of the Economic Espionage Act, while 23 cases (30%) alleged grant or visa fraud by academics. Our reporting has also found that the initiative has disproportionately affected scientists of Chinese heritage, who make up 130 (88%) of the 148 individuals charged under the initiative.
Chen’s is the eighth research integrity case to be dismissed before trial. Last month, Harvard professor Charles Lieber was found guilty on six charges of false statements and tax fraud, while the trial of University of Tennessee–Knoxville professor Anming Hu, the first research integrity case to go before a jury, ended first in a mistrial and then a full acquittal.

A catalyzing case

Chen’s indictment raised awareness of, and opposition to, the initiative because of both his prominence in his field and the seemingly routine activities for which he was being prosecuted, including collaborating with a Chinese university at the behest of his home institution. “We are all Gang Chen,” a group of MIT faculty wrote at the time, expressing both their support for their colleague and their concerns about how their own activities could draw government scrutiny.

“The end of the criminal case is tremendous news for Professor Chen, and his defense team deserves accolades for their work,” says Margaret Lewis, a law professor at Seton Hall University who has written about the China Initiative. “But let’s not forget that he was first questioned at the airport two years ago and indicted one year ago. The human cost is intense even when charges are dropped.” She adds: “I am hopeful that the Justice Department will soon move beyond announcements regarding the review of individual cases to a broader statement ending the China Initiative.”

“Rebranding the China Initiative will not be enough,” says Patrick Toomey, a senior staff attorney with the American Civil Liberties Union’s National Security Project, which has represented two prominent researchers erroneously charged before the China Initiative was announced in 2018. “The Justice Department must fundamentally reform its policies that enable racial profiling in the name of national security.”

It is not just academics and civil rights groups that are speaking out. Over the past year, criticism of the initiative has ramped up from all sides. Ninety members of Congress have requested that Attorney General Merrick Garland investigate concerns about racial profiling, and former DOJ officials have advocated for a change in direction as well. John Demers, the former head of the Justice Department division that oversees the initiative, reportedly favored a proposal for amnesty programs that would allow researchers to disclose previously undisclosed ties with no fear of prosecution. Meanwhile, in response to MIT Technology Review’s reporting, Andrew Lelling, the former US Attorney for the District of Massachusetts who brought charges against Chen, argued that the part of the program targeting academics should be shut down.

Six more research integrity cases remain pending, with four scheduled to go to trial this spring. Some kind of announcement may be coming soon: DOJ spokesman Wyn Hornbuckle told MIT Technology Review in an email last week that the Justice Department is “reviewing our approach to countering threats posed by the PRC government” and anticipates “completing the review and providing additional information in the coming weeks.”

Additional reporting by Jess Aloe. This story has been updated with a statement from Rachael Rollins, the US Attorney for the District of Massachusetts.

This group of tech firms just signed up to a safer metaverse


The internet can feel like a bottomless pit of the worst aspects of humanity. So far, there’s little indication that the metaverse—an envisioned virtual digital world where we work, play, and live—will be much better. As I reported last month, a beta tester in Meta’s virtual social platform, Horizon Worlds, has already complained of being groped.

Tiffany Xingyu Wang feels she has a solution. In August 2020—more than a year before Facebook announced it would change its name to Meta and shift its focus from its flagship social media platform to plans for its own metaverse—Wang launched the nonprofit Oasis Consortium, a group of game firms and online companies that envisions “an ethical internet where future generations trust they can interact, co-create, and exist free from online hate and toxicity.”

How? Wang thinks that Oasis can ensure a safer, better metaverse by helping tech companies self-regulate. Earlier this month, Oasis released its User Safety Standards, a set of guidelines that include hiring a trust and safety officer, employing content moderation, and integrating the latest research in fighting toxicity. Companies that join the consortium pledge to work toward these goals.

“I want to give the web and metaverse a new option,” says Wang, who has spent the past 15 years working in AI and content moderation. “If the metaverse is going to survive, it has to have safety in it.” She’s right: the technology’s success is tied to its ability to ensure that users don’t get hurt. But can we really trust that Silicon Valley’s companies will be able to regulate themselves in the metaverse?
A blueprint for a safer metaverse

The companies that have signed on to Oasis thus far include gaming platform Roblox, dating company Grindr, and video game giant Riot Games, among others. Between them they have hundreds of millions of users, many of whom are already actively using virtual spaces.

Notably, however, Wang hasn’t yet talked with Meta, arguably the biggest player in the future metaverse. Her strategy is to approach Big Tech “when they see the meaningful changes we’re making at the forefront of the movement.” (Meta pointed me to two documents when asked about its plans for safety in the metaverse: a press release detailing partnerships with groups and individuals for “building the metaverse responsibly,” and a blog post about keeping VR spaces safe. Both were written by Meta CTO Andrew Bosworth.)
Wang says she hopes to ensure transparency in a few ways. One is by creating a grading system to ensure that the public knows where a company stands in maintaining trust and safety, not unlike the system by which many restaurants showcase city grades for meeting health and cleanliness standards. Another is by requiring member companies to employ a trust and safety officer. This position has become increasingly common in larger firms, but there’s no agreed set of standards by which each trust and safety officer must abide, Wang says.

But much of Oasis’s plan remains, at best, idealistic. One example is a proposal to use machine learning to detect harassment and hate speech. As my colleague Karen Hao reported last year, AI models either give hate speech too much chance to spread or overstep. Still, Wang defends Oasis’s promotion of AI as a moderating tool. “AI is as good as the data gets,” she says. “Platforms share different moderation practices, but all work toward better accuracies, faster reaction, and safety by design prevention.”

The document itself is seven pages long and outlines future goals for the consortium. Much of it reads like a mission statement, and Wang says that the first several months’ work have centered on creating advisory groups to help create the goals.

Other elements of the plan, such as its content moderation strategy, are vague. Wang says she would like companies to hire a diverse set of content moderators so they can understand and combat harassment of people of color and those who identify as non-male. But the plan offers no further steps toward achieving this goal. The consortium will also expect member companies to share data on which users are being abusive, which is important in identifying repeat offenders. Participating tech companies will partner with nonprofits, government agencies, and law enforcement to help create safety policies, Wang says. She also plans for Oasis to have a law enforcement response team, whose job it will be to notify police about harassment and abuse. But it remains unclear how the task force’s work with law enforcement will differ from the status quo.

Balancing privacy and safety

Despite the lack of concrete details, experts I spoke to think that the consortium’s standards document is a good first step, at least. “It’s a good thing that Oasis is looking at self-regulation, starting with the people who know the systems and their limitations,” says Brittan Heller, a lawyer specializing in technology and human rights.

It’s not the first time tech companies have worked together in this way. In 2017, some agreed to exchange information freely with the Global Internet Forum to Combat Terrorism. Today, GIFCT remains independent, and companies that sign on to it self-regulate. Lucy Sparrow, a researcher at the School of Computing and Information Systems at the University of Melbourne, says that what Oasis has going for it is that it offers companies something to work with, rather than waiting for them to come up with the language themselves or for a third party to do that work. Sparrow adds that baking ethics into design from the start, as Oasis pushes for, is admirable and that her research in multiplayer game systems shows it makes a difference. “Ethics tends to get pushed to the sidelines, but here, they [Oasis] are encouraging thinking about ethics from the beginning,” she says.

But Heller says that ethical design might not be enough.
She suggests that tech companies retool their terms of service, which have been criticized heavily for taking advantage of consumers without legal expertise.  Sparrow agrees, saying she’s hesitant to believe that a group of tech companies will act in consumers’ best interest. “It really raises two questions,” she says. “One, how much do we trust capital-driven corporations to control safety? And two, how much control do we want tech companies to have over our virtual lives?”  It’s a sticky situation, especially because users have a right to both safety and privacy, but those needs can be in tension. For example, Oasis’s standards include guidelines for lodging complaints with law enforcement if users are harassed. If a person wants to file a report now, it’s often hard to do so, because for privacy reasons, platforms often aren’t recording what’s going on. This change would make a big difference in the ability to discipline repeat offenders; right now, they can get away with abuse and harassment on multiple platforms, because those platforms aren’t communicating with each other about which users are problematic. Yet Heller says that while this is a great idea in theory, it’s hard to put in practice, because companies are obliged to keep user information private according to the terms of service. “How can you anonymize this data and still have the sharing be effective?” she asks. “What would be the threshold for having your data shared? How could you make the process of sharing information transparent and user removals appealable? Who would have the authority to make such decisions?” “There is no precedent for companies sharing information [with other companies] about users who violate terms of service for harassment or similar bad behavior, even though this often crosses platform lines,” she adds.  Better content moderation—by humans—could stop harassment at the source. Yet Heller isn’t clear on how Oasis plans to standardize content moderation, especially between a text-based medium and one that is more virtual. And moderating in the metaverse will come with its own set of challenges. “The AI-based content moderation in social media feeds that catches hate speech is primarily text-based,” Heller says. “Content moderation in VR will need to primarily track and monitor behavior—and current XR [virtual and augmented reality] reporting mechanisms are janky, at best, and often ineffective. It can’t be automated by AI at this point.” That puts the burden of reporting abuse on the user—as the Meta groping victim experienced. Audio and video are often also not recorded, making it harder to establish proof of an assault. Even among those platforms recording audio, Heller says, most retain only snippets, making context difficult if not impossible to understand. Wang emphasized that the User Safety Standards were created by a safety advisory board, but they are all members of the consortium—a fact that made Heller and Sparrow queasy. The truth is, companies have never had a great track record for protecting consumer health and safety in the history of the internet; why should we expect anything different now? Sparrow doesn’t think we can. “The point is to have a system in place so justice can be enacted or signal what kind of behaviors are expected, and there are consequences for those behaviors that are out of line,” she says. That might mean having other stakeholders and everyday citizens involved, or some kind of participatory governance that allows users to testify and act as a jury. 
One thing’s for sure, though: safety in the metaverse might take more than a group of tech companies promising to watch out for us.