This group of tech firms just signed up to a safer metaverse

By incloudhosting.co.uk

The internet can feel like a bottomless pit of the worst aspects of humanity. So far, there’s little indication that the metaverse—an envisioned virtual digital world where we work, play, and live—will be much better. As I reported last month, a beta tester in Meta’s virtual social platform, Horizon Worlds, has already complained of being groped.

Tiffany Xingyu Wang feels she has a solution. In August 2020—more than a year before Facebook announced it would change its name to Meta and shift its focus from its flagship social media platform to plans for its own metaverse—Wang launched the nonprofit Oasis Consortium, a group of game firms and online companies that envisions “an ethical internet where future generations trust they can interact, co-create, and exist free from online hate and toxicity.”

How? Wang thinks that Oasis can ensure a safer, better metaverse by helping tech companies self-regulate. Earlier this month, Oasis released its User Safety Standards, a set of guidelines that include hiring a trust and safety officer, employing content moderation, and integrating the latest research in fighting toxicity. Companies that join the consortium pledge to work toward these goals.

“I want to give the web and metaverse a new option,” says Wang, who has spent the past 15 years working in AI and content moderation. “If the metaverse is going to survive, it has to have safety in it.” She’s right: the technology’s success is tied to its ability to ensure that users don’t get hurt. But can we really trust that Silicon Valley’s companies will be able to regulate themselves in the metaverse?
A blueprint for a safer metaverse

The companies that have signed on to Oasis thus far include gaming platform Roblox, dating company Grindr, and video game giant Riot Games, among others. Between them they have hundreds of millions of users, many of whom are already actively using virtual spaces.

Notably, however, Wang hasn’t yet talked with Meta, arguably the biggest player in the future metaverse. Her strategy is to approach Big Tech “when they see the meaningful changes we’re making at the forefront of the movement.” (Meta pointed me to two documents when asked about its plans for safety in the metaverse: a press release detailing partnerships with groups and individuals for “building the metaverse responsibly,” and a blog post about keeping VR spaces safe. Both were written by Meta CTO Andrew Bosworth.)
To support MIT Technology Review’s journalism, please consider becoming a subscriber.

Wang says she hopes to ensure transparency in a few ways. One is by creating a grading system, so that the public knows where a company stands on trust and safety, not unlike the system by which many restaurants display city grades for meeting health and cleanliness standards. Another is by requiring member companies to employ a trust and safety officer. The position has become increasingly common in larger firms, but there is no agreed-upon set of standards that every trust and safety officer must abide by, Wang says.

But much of Oasis’s plan remains, at best, idealistic. One example is a proposal to use machine learning to detect harassment and hate speech. As my colleague Karen Hao reported last year, AI models either give hate speech too much room to spread or overstep and take down legitimate content. Still, Wang defends Oasis’s promotion of AI as a moderating tool. “AI is as good as the data gets,” she says. “Platforms share different moderation practices, but all work toward better accuracies, faster reaction, and safety by design prevention.”

The document itself is seven pages long and outlines future goals for the consortium. Much of it reads like a mission statement, and Wang says that the first several months’ work have centered on creating advisory groups to help shape the goals.

Other elements of the plan, such as its content moderation strategy, are vague. Wang says she would like companies to hire a diverse set of content moderators so they can understand and combat harassment of people of color and those who identify as non-male. But the plan offers no further steps toward achieving this goal. The consortium will also expect member companies to share data on which users are being abusive, which is important in identifying repeat offenders.
Participating tech companies will partner with nonprofits, government agencies, and law enforcement to help create safety policies, Wang says. She also plans for Oasis to have a law enforcement response team, whose job will be to notify police about harassment and abuse. But it remains unclear how the task force’s work with law enforcement will differ from the status quo.

Balancing privacy and safety

Despite the lack of concrete details, experts I spoke to think that the consortium’s standards document is a good first step, at least. “It’s a good thing that Oasis is looking at self-regulation, starting with the people who know the systems and their limitations,” says Brittan Heller, a lawyer specializing in technology and human rights.

It’s not the first time tech companies have worked together in this way. In 2017, some agreed to exchange information freely with the Global Internet Forum to Combat Terrorism. Today, GIFCT remains independent, and companies that sign on to it self-regulate.

Lucy Sparrow, a researcher at the School of Computing and Information Systems at the University of Melbourne, says that what Oasis has going for it is that it offers companies something to work with, rather than waiting for them to come up with the language themselves or for a third party to do that work.

Sparrow adds that baking ethics into design from the start, as Oasis pushes for, is admirable, and that her research on multiplayer game systems shows it makes a difference. “Ethics tends to get pushed to the sidelines, but here, they [Oasis] are encouraging thinking about ethics from the beginning,” she says.

But Heller says that ethical design might not be enough. She suggests that tech companies retool their terms of service, which have been criticized heavily for taking advantage of consumers without legal expertise. Sparrow agrees, saying she’s hesitant to believe that a group of tech companies will act in consumers’ best interest.
“It really raises two questions,” she says. “One, how much do we trust capital-driven corporations to control safety? And two, how much control do we want tech companies to have over our virtual lives?”

It’s a sticky situation, especially because users have a right to both safety and privacy, but those needs can be in tension. For example, Oasis’s standards include guidelines for lodging complaints with law enforcement if users are harassed. If a person wants to file a report now, it’s often hard to do so, because for privacy reasons, platforms often aren’t recording what’s going on. This change would make a big difference in the ability to discipline repeat offenders; right now, they can get away with abuse and harassment on multiple platforms, because those platforms aren’t communicating with each other about which users are problematic.

Yet Heller says that while this is a great idea in theory, it’s hard to put into practice, because companies are obliged to keep user information private under their terms of service. “How can you anonymize this data and still have the sharing be effective?” she asks. “What would be the threshold for having your data shared? How could you make the process of sharing information transparent and user removals appealable? Who would have the authority to make such decisions?”

“There is no precedent for companies sharing information [with other companies] about users who violate terms of service for harassment or similar bad behavior, even though this often crosses platform lines,” she adds.

Better content moderation—by humans—could stop harassment at the source. Yet Heller isn’t clear on how Oasis plans to standardize content moderation, especially between a text-based medium and one that is more virtual. And moderating in the metaverse will come with its own set of challenges. “The AI-based content moderation in social media feeds that catches hate speech is primarily text-based,” Heller says.
“Content moderation in VR will need to primarily track and monitor behavior—and current XR [virtual and augmented reality] reporting mechanisms are janky, at best, and often ineffective. It can’t be automated by AI at this point.”

That puts the burden of reporting abuse on the user—as the Meta groping victim experienced. Audio and video are often not recorded either, making it harder to establish proof of an assault. Even among platforms that do record audio, Heller says, most retain only snippets, making context difficult if not impossible to understand.

Wang emphasized that the User Safety Standards were created by a safety advisory board, but its members are all drawn from the consortium—a fact that made Heller and Sparrow queasy. The truth is, companies have never had a great track record for protecting consumer health and safety in the history of the internet; why should we expect anything different now?

Sparrow doesn’t think we can. “The point is to have a system in place so justice can be enacted or signal what kind of behaviors are expected, and there are consequences for those behaviors that are out of line,” she says. That might mean involving other stakeholders and everyday citizens, or some kind of participatory governance that allows users to testify and act as a jury.

One thing’s for sure, though: safety in the metaverse might take more than a group of tech companies promising to watch out for us.

Tonga’s volcano blast cut it off from the world. Here’s what it will take to get it reconnected.

Hunga Tonga–Hunga Ha‘apai, an underwater volcano off the coast of Tonga, has erupted several times in the last 13 years, but the most recent eruption, on January 15, was likely its most destructive. The blast has had global consequences: more than 6,000 miles away, waves caused by the eruption drowned two people in Peru. But the effect of the volcanic blast on Tongans living closer to ground zero isn’t yet known, though it’s feared that the ensuing tsunami may have killed many people and displaced many more from their homes.

That’s because Tonga has been suddenly cut off from the internet, making it that much harder to coordinate aid or rescue missions. In a highly interconnected world, Tonga is now completely dark, and it’s almost impossible to get word out. Getting the country back online is vital—but it could take weeks.

Internet traffic plunged to near zero around 5:30 p.m. local time on January 15, according to data from web performance firm Cloudflare. That connection hasn’t yet been restored, says Doug Madory of Kentik, an internet observatory company, who has been monitoring the country’s web traffic.

The reason Tonga fell offline isn’t yet known for certain, but initial investigations suggest that the undersea cable connecting it to the rest of the world was destroyed by the blast. “Tonga primarily uses a single subsea cable to connect to the internet,” says Madory. The Tonga Cable System runs 514 miles between Tonga and Fiji, bringing internet service to the two island nations. Previously, that connection has been backed up by a satellite internet connection. “I guess they’re not able to do that this time, because of some technical failure preventing them from being able to switch over,” says Madory. He believes that the wave resulting from the volcanic explosion could have taken out the satellite dishes.
Jamaica-based mobile network operator Digicel, which owns a minority stake in the cable alongside the Tongan government, said in a statement: “All communication to the outside world in Tonga is affected due to damage.” Southern Cross Cable, a New Zealand–based company that runs cables interconnecting with the Tonga Cable System, believes there’s a possible break around 23 miles offshore. It’s also believed that the domestic subsea cable is broken around 30 miles from Tonga’s capital, Nukuʻalofa.

Such breaks are usually found by sending light down the fiber-optic core of the cabling and calculating how long it takes for the signal to bounce back—which it does when interrupted, says Christian Kaufmann, vice president of network technology at content delivery network Akamai.

If that’s confirmed, it’s just about the worst possible news for Tonga’s connectivity. “It will be days—maybe weeks—before the cable is fixed,” says Madory.

The outage isn’t the first time Tonga’s internet infrastructure has been plagued with problems. In January 2019, the country experienced a “near-total” internet blackout when an undersea cable was cut. Initial reports indicated that a magnetic storm and lightning may have damaged the connection—but a subsequent investigation found that a Turkish-flagged ship dropping anchor had severed the line. Fixing the issue cost an estimated $200,000, and while repairs were under way, the island relied on satellite internet connections.

Those same satellite connections are likely to be the only savior for Tonga’s internet in the near term—but with unknown damage to them, the country could be in for a difficult period. “They were probably thinking: ‘Well, if the cable goes down, we have the satellites for resilience,’” says Madory.
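The fault-finding method Kaufmann describes is, in essence, optical time-domain reflectometry: light travels through glass fiber at roughly two-thirds of its speed in a vacuum, so timing the reflection from a break gives the distance to it. A minimal sketch of that arithmetic (the group refractive index of 1.468 is a typical figure for standard single-mode fiber, an assumption here rather than a number from the article):

```python
# Estimate the distance to a fiber break from the round-trip time of a
# reflected light pulse, as in optical time-domain reflectometry (OTDR).

C_VACUUM = 299_792_458            # speed of light in a vacuum, m/s
GROUP_INDEX = 1.468               # typical group index of single-mode fiber (assumed)
V_FIBER = C_VACUUM / GROUP_INDEX  # ~204,000 km/s inside the glass

def distance_to_break_km(round_trip_seconds: float) -> float:
    """One-way distance from the instrument to the fault, in kilometers.

    The pulse travels out to the break and back, so the one-way
    distance is half the round-trip path length.
    """
    return (V_FIBER * round_trip_seconds) / 2 / 1000

# A reflection returning after ~362 microseconds corresponds to a break
# roughly 37 km (about 23 miles) out, in line with the offshore estimate
# reported for the Tonga Cable System.
print(f"{distance_to_break_km(362e-6):.1f} km")
```

The same timing principle lets a repair vessel sail almost directly to the fault rather than dredging blindly along hundreds of miles of cable.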
“If a volcano detonates right next to you and takes out both your cable and your satellite, there’s not much you can do.” Huge amounts of ash thrown into the air by the eruption could also be affecting satellite connectivity, says Kaufmann.

Fixing the broken cable won’t be easy. Specialized repair vessels tasked with fixing breakages—which occur every week somewhere around the world, albeit usually with less force than this eruption is likely to have produced—need to be sent to the site of the problem. One vessel that could help is the CS Resilience, currently off Papua New Guinea, nearly 3,000 miles away. Any vessel could take days or weeks to remedy the issue. “There’s a priority over whose cable gets fixed first,” says Madory. “Countries pay a little premium to get fixed first.”

Once one of these vessels arrives on scene, which itself could take days, it drops a hook to snag the cable running along the sea floor. The hooked cable, which in the deep ocean can be as thin as a common garden hose, is then winched up onto the deck of the vessel, where technicians work to fix the break. “The cabling itself is not the most sturdy thing,” says Kaufmann. It’s then lowered gently back into the water. “That process hasn’t changed much in the 150 years or so that we’ve had submarine cables,” says Madory.

There are, of course, compounding factors that can complicate the process. Tonga is likely to be besieged by vessels looking to deliver aid to the country, which may mean internet cabling takes a back seat to saving lives, restoring power, and delivering vital food and water supplies. The precise location of the rupture also matters: generally, the farther the break is from shore, the deeper the cable—and the harder it is to reach and drag up from the floor. That’s before considering that the onshore power lines that help keep the connection online may well be damaged beyond easy repair.
“Tonga is on an extremity of the internet,” says Madory. “Once you go out from the core of the internet, you’re just going to have fewer options.”

The internet outage shows how dependent the world’s connectivity can be on single points of failure. “It’s one of those stories that put the lie to the idea that the internet was designed to withstand nuclear wars,” says Alan Woodward, a professor of cybersecurity at the University of Surrey in the UK. “Chewing gum holds most of it together.”

Woodward suggests that rare physical events such as volcanic explosions are difficult to design for, but countries should try to maintain redundancy through multiple undersea connections, ideally ones that follow different routes so that a localized incident won’t affect multiple lines. Yet redundancy doesn’t come cheap—especially for a small nation of just over 100,000 people like Tonga. It’s also likely that with a massive eruption such as this one, the movement of the seabed would have caused a fissure in any secondary cable, even one laid on the other side of Tonga.

“There’s a broader message around the resilience of infrastructure,” says Andrew Bennett, who analyzes internet policy at the Tony Blair Institute for Global Change. “Although the UK or US isn’t going to be like Tonga, increasingly there are geopolitical tensions and debate [around] things like undersea cables that are pushing us into a more fractious place. You don’t want to end up in a place where you have sovereign cables for the allies and other cables for everyone else.”

Bennett suggests two options to bridge the connectivity gap. One is the rapid rollout of satellite internet—and satellite constellations are being launched into space as we speak. The other is to devote more money to the problem. “If you look at resilient internet infrastructure as a public good, countries who can afford it should pay for it and provide it to others,” he says.
Closing the global digital divide by 2030 would cost just 0.2% of the gross national income of OECD countries per year, according to the institute. Given that the internet is increasingly seen as a fourth vital service, alongside heat, power, and water, such a long outage for 100,000 people is a major disaster—compounding the immediate physical effects of the eruption. And it highlights the fragility of certain parts of the internet, particularly outside the rich Western world. “The internet’s not necessarily crumbling at the core,” says Woodward. “But it’s always going to be a little frayed around the edges.”