A new study warns that sloths living in high-altitude rainforests of South and Central America could face extinction if temperatures there continue to rise in line with climate predictions.
The research, published in PeerJ Life & Environment, suggests that some sloths’ restricted ability to migrate to cooler regions and limited metabolic flexibility make them particularly vulnerable to climate change.
“Sloths are inherently limited by their slow metabolism and unique inability to regulate body temperature effectively, unlike most mammals,” says Dr Rebecca Cliffe, lead researcher of the study from Swansea University and The Sloth Conservation Foundation in the UK.
“Our research shows that sloths, particularly in high-altitude regions, may not be able to survive the significant increases in temperature forecast for 2100.” (snip-MORE)
A team at Monash University in Victoria developing a hormone-free, reversible male contraceptive has now figured out the 3D structure of one of their primary therapeutic targets – the P2X1-purinergic receptor (P2X1).
According to Dr Sab Ventura from the Monash Institute of Pharmaceutical Sciences (MIPS), this has been the main stumbling block that has so far hindered the team from progressing the drug discovery program to the next stage.
“Our primary goal is to develop a male contraceptive pill that is not only hormone-free but also bypasses side effects such as long-term irreversible impacts on fertility, making it suitable for young men seeking contraceptive options,” says Ventura.
In previous research in mice, the team showed that simultaneous inactivation of P2X1 and a second protein, α1A-adrenergic receptor, resulted in male infertility.
“Now we know what our therapeutic target looks like, we can generate drugs that can bind to it appropriately, which totally changes the game,” says Ventura. (snip-MORE)
Mount Everest is tall. In other news, the sky is blue.
But Everest (also called Chomolungma and Sagarmāthā) is taller than it logically should be – towering 238m above the world’s next highest peak, K2, and more than 250m higher than any of its counterparts in the relatively uniform Himalaya range.
Plus, it’s growing at about 2mm a year, faster than the expected rate for the range.
A team of Chinese and UK scientists have now suggested why this is the case.
The researchers think the culprit is a nearby river which “captured” another river 89,000 years ago, causing erosion that made Everest more buoyant.
They’ve published their findings in Nature Geoscience.
The Himalayan peaks get their extraordinary height from the collision of the Indian and Eurasian tectonic plates, causing the Earth’s crust to thicken and the mountain range to push upwards.
“An interesting river system exists in the Everest region,” says co-author Dr Jin-Gen Dai, from China University of Geosciences.
The team used numerical modelling to see how the river changed over time. They found that, about 89,000 years ago, the Arun river “captured” another nearby river.
This event, referred to as “river piracy”, happens when a river diverts its course and takes up the discharge of another river or stream.
“Our research shows that as the nearby river system cuts deeper, the loss of material is causing the mountain to spring further upwards,” says co-author Adam Smith, a PhD student at University College London, UK.
The team estimates that the river piracy has made Everest between 15 and 50m higher than it would otherwise be.
It’s also made neighbouring peaks, Lhotse and Makalu, unusually tall. These are the 4th and 5th highest mountains in the world, respectively. (snip-MORE)
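The "buoyancy" mechanism here is isostasy: the crust floats on the denser mantle, so rock stripped away by river incision lightens the crust and lets the surface rebound upwards. A back-of-envelope sketch, using assumed textbook densities (this is not the team's numerical model, which also accounts for the stiffness of the crust):

```python
# Simple Airy isostasy: eroding a layer of crust removes weight, and
# the lightened crust floats higher on the denser mantle beneath.
RHO_CRUST = 2700.0   # kg/m^3, typical continental crust (assumed)
RHO_MANTLE = 3300.0  # kg/m^3, typical upper mantle (assumed)

def isostatic_uplift(eroded_thickness_m: float) -> float:
    """Surface rebound after a layer of crust is eroded away."""
    return eroded_thickness_m * RHO_CRUST / RHO_MANTLE

# Stripping ~50 m of rock from the region rebounds the surface by:
print(round(isostatic_uplift(50.0), 1))  # ~40.9 m
```

In reality the erosion is concentrated in the Arun gorge while the rebound is spread across the surrounding peaks by the rigidity of the crust, which is one reason the study used full numerical modelling rather than a simple balance like this to arrive at its 15–50m figure.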
It’s not the famous Star Trek tricorder but it’s close: researchers have developed a hand-held scanner that can generate highly detailed 3D images of body parts in almost real time.
The technology can accurately image blood vessels up to 15mm deep in human tissue, which the researchers say could help to diagnose conditions such as cancer, cardiovascular disease, and arthritis.
“We’ve come a long way with photoacoustic imaging in recent years, but there were still barriers to using it in the clinic,” says Paul Beard of University College London (UCL), UK, corresponding author of the new Nature Biomedical Engineering paper.
“The breakthrough in this study is the acceleration in the time it takes to acquire images, which is between 100 and 1,000 times faster than previous scanners.
“This speed avoids motion-induced blurring, providing highly detailed images of a quality that no other scanner can provide. It also means that rather than taking 5 minutes or longer, images can be acquired in real time, making it possible to visualise dynamic physiological events.
“These technical advances make the system suitable for clinical use for the first time, allowing us to look at aspects of human biology and disease that we haven’t been able to before.” (snip-MORE)
My really bad day trying to fix up my computers. I am very tired and worked all day on my computers to try to speed them up, and it failed badly. Hugs. Scottie
(One of the teachers with whom I worked had a beautiful tattoo of this painting on her inner wrist. She said it gave her strength. I need to send this to her, as she tutors STEAM aside from classroom work, and this is her top favorite painting.)
Starry Night, by Vincent van Gogh. The painting is currently held in the Museum of Modern Art in New York, USA.
Scientists have peered at Vincent van Gogh’s The Starry Night painting and discovered it displays a startling resemblance to real atmospheric turbulence.
To see stars, one needs clear skies. But just because we can’t see them doesn’t mean there aren’t intricate patterns of air movement above us on a clear night.
A paper published in Physics of Fluids suggests that van Gogh had an “intuitive” understanding of this while making his famous painting in 1889.
A Chinese and French team analysed the brush strokes in The Starry Night, aiming to see how similar they were to real atmospheric movements.
The masterpiece has been the subject of several atmospheric studies before, with contradictory conclusions, but the researchers say they’re the first to look at all of the painting’s whirls and eddies.
They looked at the 14 main swirls in the painting, and compared these with theories on energy and turbulent flows in the atmosphere.
“The scale of the paint strokes played a crucial role,” says author Associate Professor Yongxiang Huang, a researcher in fluid dynamics at Xiamen University, China.
“With a high-resolution digital picture, we were able to measure precisely the typical size of the brushstrokes and compare these to the scales expected from turbulence theories.”
The authors measured the whirling brush strokes in van Gogh’s “The Starry Night,” along with variances in brightness of the paint colours, to see how closely they reflected real atmospheric physics. There were several matches between the painting and fluid dynamics, suggesting van Gogh had an “intuitive” understanding of these concepts. Credit: Yinxiang Ma
As well as brush stroke size, the researchers also examined the “relative luminance” of paint colours used in the painting’s swirls.
They found that the picture aligned with a theory of turbulence called Kolmogorov’s law, which predicts how energy cascades from large swirls down to ever-smaller ones in a turbulent flow.
The changes in brightness reflect a process called Batchelor’s scaling, which describes how a quantity carried by a fluid – such as dye or heat – varies at the smallest scales.
“It reveals a deep and intuitive understanding of natural phenomena,” says Huang.
“Van Gogh’s precise representation of turbulence might be from studying the movement of clouds and the atmosphere or an innate sense of how to capture the dynamism of the sky.”
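The Kolmogorov scaling the researchers compared against can be sketched with synthetic data (the numbers below are illustrative, not measurements from the painting): in the inertial range, energy falls off with spatial frequency k as k^(-5/3), and a log-log fit recovers that exponent.

```python
import numpy as np

# Kolmogorov's law: turbulent kinetic energy is distributed across
# eddy sizes as E(k) ~ k^(-5/3), where k is spatial frequency
# (1 / eddy size). On a log-log plot this is a straight line whose
# slope is the exponent, which a linear fit can recover.
k = np.logspace(0, 2, 50)          # spatial frequencies (arbitrary units)
E = 2.0 * k ** (-5.0 / 3.0)        # idealised inertial-range spectrum

slope, _ = np.polyfit(np.log(k), np.log(E), 1)
print(round(slope, 3))  # -1.667, i.e. the -5/3 exponent
```

The study's analysis worked in the opposite direction: measure the sizes and luminance of the painted swirls, fit the scaling, and ask whether the exponent matches what turbulence theory predicts.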
US researchers have developed a chalk-based coating that can reduce the temperature under fabric by roughly 5°C.
The researchers say their environmentally benign substance could be used to coat any type of fabric and turn it into a radiative cooling textile.
“We see a true cooling effect,” says Evan Patamia, a graduate student at the University of Massachusetts Amherst.
“What is underneath the sample feels colder than standing in the shade.”
Patamia presented the team’s invention at the American Chemical Society’s 2024 Fall Meeting earlier this week.
Substances that can both reflect sunlight and allow body heat to escape are well known to chemists. But they generally require costly or environmentally dangerous materials to make.
“Can we develop a textile coating that does the same thing using natural or environmentally benign materials?” summarises chemist Trisha Andrew, also at Amherst, of the work done by her and her colleagues.
Inspired by crushed limestone, which is used to cool buildings, the researchers tried solutions of calcium carbonate – the main component in limestone and chalk – as well as barium sulphate.
They used squares of fabric treated with a process called chemical vapour deposition, which added a layer of a carbon-based polymer onto the textiles.
When dipped in the solutions, the fabrics built up a chalky matte layer of crystals which could reflect UV and infrared light.
They tested the treated fabrics outside on a warm afternoon: air underneath them was about 5°C cooler than the ambient temperature, and roughly 9°C cooler than air under untreated fabrics.
The coating is also resistant to laundry detergents.
“What makes our technique unique is that we can do this on nearly any commercially available fabric and turn it into something that can keep people cool,” says Patamia.
“Without any power input, we’re able to reduce how hot a person feels, which could be a valuable resource where people are struggling to stay cool in extremely hot environments.”
Andrew is now part of a startup aiming to test the process on larger bolts of fabric, to see if it can be scaled to industry.
A hydrogel has learned to play the 1970s video game “Pong” and improved its ability to hit the ball by 10% with some practice.
Dr Hayashi, a biomedical engineer at the University of Reading in the UK, says: “Our research shows that even very simple materials can exhibit complex, adaptive behaviours typically associated with living systems or sophisticated AI.
“This opens up exciting possibilities for developing new types of ‘smart’ materials that can learn and adapt to their environment.”
The research is described in a paper published in Cell Reports Physical Science.
Video link, an example run of a hydrogel playing Pong.
Credit: Cell Reports Physical Science/Strong et al.
What is a hydrogel?
A hydrogel, like gelatine or agar, is made of a 3D network of polymers that become jelly-like when water is added.
The hydrogel in this study is an “ionic electro-active polymer”, where the media surrounding the polymer matrix contains charged particles, in this case hydrogen ions.
As a result, it can deform when an electric current is applied to it.
Stimulation by an electric field causes the hydrogen ions to migrate and, as they move, they drag water molecules with them, causing areas of the gel to swell.
“The rate at which the hydrogel de-swells is much slower than the rate at which it swells in the first place, meaning that the ions’ next motion is influenced by their previous motion, which is sort of like memory occurring,” says first author and University of Reading robotics engineer, Dr Vincent Strong.
“The continued rearrangement of ions within the hydrogel is based on previous rearrangements within the hydrogel, continuing back to when it was first made and had a homogeneous distribution of ions.”
It’s this property the researchers exploited to teach the hydrogel to play Pong.
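The asymmetry Strong describes can be caricatured as two first-order relaxations with very different time constants (the constants and time step below are assumptions chosen only to make the effect visible, not values from the paper):

```python
# Toy model of the gel's "memory": swelling responds quickly to an
# applied field, while de-swelling relaxes far more slowly, so the
# gel's state at any moment depends on its history of stimulation.
TAU_SWELL = 1.0      # fast swelling time constant, seconds (assumed)
TAU_DESWELL = 10.0   # slow de-swelling time constant, seconds (assumed)

def step(state: float, field_on: bool, dt: float = 0.1) -> float:
    """One Euler step of first-order relaxation toward 1 (on) or 0 (off)."""
    target, tau = (1.0, TAU_SWELL) if field_on else (0.0, TAU_DESWELL)
    return state + (target - state) * dt / tau

state = 0.0
for _ in range(50):            # 5 s of stimulation: swells almost fully
    state = step(state, True)
after_pulse = state
for _ in range(50):            # 5 s with the field off: decays only partly
    state = step(state, False)
print(round(after_pulse, 2), round(state, 2))  # still ~60% swollen
```

Because de-swelling is slow, the gel’s state when the next stimulus arrives still encodes the previous one – the minimal ingredient for the kind of memory the team exploited.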
How does a hydrogel play Pong? (snip-More on the page)
NASA is about to launch a helium balloon carrying a telescope to test its ability to see exoplanet atmospheres.
The Exoplanet Climate Infrared Telescope (EXCITE) is eventually destined to fly around the poles, collecting data above much of the Earth’s atmosphere, but its first test flight is due to happen from the USA in the next few months.
It will be launched for the first time from the Columbia Scientific Balloon Facility in New Mexico.
EXCITE (EXoplanet Climate Infrared TElescope) hangs from a ceiling at the Columbia Scientific Balloon Facility’s location in Fort Sumner, New Mexico. The mission team practiced taking observations ahead of flight by looking out the hangar doors at night. Credit: NASA/Jeanette Kazmierczak
“EXCITE can give us a three-dimensional picture of a planet’s atmosphere and temperature by collecting data the whole time the world orbits its star,” says principal investigator Peter Nagler, from NASA’s Goddard Space Flight Center. (snip-More on the page)
The integration of artificial intelligence into public health could have revolutionary implications for the global south—if only it can get online.
(Whew! It’s a long one. Maybe read it in part, then come back and read some more. Or read it all at once, it’s not insurmountable. I’m interested what people here think about this.)
The transformative potential of digital connectivity became a global game changer more than two decades ago. Mobile phones reshaped telecommunications, enabling connectivity even in homes without landlines. Digital health quickly leveraged these innovations, making remote patient-doctor communication, digital payments, care coordination, and online peer support networks possible.
Artificial intelligence (AI) has undoubtedly sparked another phase of digital innovation. Although the field’s origins date to the mid-twentieth century, recent advancements in large language models (LLMs) have thrust it into the spotlight. Reflecting this growing relevance, the World Health Organization (WHO) dedicated a session at its World Health Assembly (WHA) in early 2024 to AI’s implications for global health, convening regional, national, academic, and international health organizations and actors to examine this matter.
AI Applications in Global Health
The literature generally presents four key use cases for artificial intelligence in health in low- and middle-income countries: disease diagnosis, risk assessment, outbreak preparation and response, and planning and policy-making. As the 2021 WHO report on AI in healthcare indicates, several AI applications are already in use or in development for diagnosis and assessment, such as in India for creating encephalograms in six minutes; in Rwanda and Pakistan for patient navigation; in Uganda for malaria diagnosis; and in Nigeria for monitoring vital signs in mothers and children, and detecting infant asphyxia. On a broader scale, the advancement of DeepMind’s AlphaFold system in predicting the three-dimensional shape of proteins holds promise for enhancing our understanding of diseases and accelerating treatments.
Use cases in outbreak surveillance and response are also prominent. Google Flu Trends used search engine queries to predict influenza activity, but its overestimation of flu prevalence demonstrated the need for continuous algorithm updates. Tools like HealthMap have also proven valuable, detecting early signs of vaping-related lung disease and issuing an early bulletin about the novel coronavirus in Wuhan.
AI is also being used in planning and policy-making, such as in South Africa, where machine-learning (ML) models were used to predict how long recruited health workers would commit to their placements in rural communities; and in Brazil, where artificial neural networks were used to create a method to geographically optimize resources based on population health needs.
Could AI Represent a Sea-Change in Global Health?
The integration of AI in public health is still evolving and being cautiously assessed in some cases, but it’s poised to transform key health functions. Evidence generation, the foundation of health policies and practices, is undergoing significant change. Traditionally, systematic reviews, a cornerstone of evidence synthesis, may take months or even years to complete. Now tools like Eppi-Reviewer use ML for more efficient screening, while platforms like Open Evidence are able to summarize existing studies rapidly. As AI becomes capable of handling technical aspects such as quality appraisal, meta-analysis, and synthesis with high rigor and fidelity, its role in evidence generation will expand. This advancement will enable more cost-effective and timely production of health guidelines, with leading bodies already creating guidelines for AI use in evidence synthesis.
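As a rough illustration of what ML-assisted screening involves (hypothetical abstracts and a deliberately crude bag-of-words similarity, not Eppi-Reviewer's actual classifier): rank unscreened abstracts by how closely they resemble ones a reviewer has already marked relevant, so likely includes surface first.

```python
from collections import Counter
import math

def bow(text: str) -> Counter:
    """Bag-of-words representation: word counts, case-folded."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

# An abstract the reviewer has already labelled relevant (made up):
relevant = bow("malaria diagnosis machine learning rural clinics")

# Unscreened abstracts (made up), ranked by similarity to the labelled one:
unscreened = {
    "A": "deep learning for malaria diagnosis in rural settings",
    "B": "survey of urban transport planning",
}
ranking = sorted(unscreened,
                 key=lambda k: cosine(relevant, bow(unscreened[k])),
                 reverse=True)
print(ranking)  # abstract A, about malaria diagnosis, is screened first
```

Real tools use far stronger text models and retrain as the reviewer labels more studies, but the prioritisation loop is the same: the machine orders the pile, the human still makes the include/exclude calls.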
Data collection and analysis are also experiencing transformative changes. AI-powered tools enable rapid analysis of both structured and unstructured data, marking a significant shift from traditional paper-based methods and conventional fieldwork. This capability has a remarkable impact on public health strategies centered on behavior change. AI can allow for the creation of highly targeted health promotion campaigns with unprecedented speed and precision. Moreover, sentiment analysis tools can assess public perceptions in real-time, enabling agile adjustments to ongoing health campaigns.
The healthcare workforce is also expected to evolve as AI-human partnerships are normalized. For instance, Hippocratic AI’s generative models can perform certain care management functions, while Google’s Med-Gemini provides real-time feedback on medical procedures, including surgeries. As they improve and are adopted by practitioners, these tools will have the potential to enhance the cost-effectiveness and precision of healthcare delivery.
As of May 2024, the FDA had authorized 882 AI- and ML-enabled medical devices. The rising volume of such AI-enabled devices as well as the rise in registered clinical trials related to their use underscores how much the field has embraced such tools.
A Changing Actor Landscape
The integration of AI in healthcare is not only transforming practices but also reshaping the landscape of global health actors. Historically, global health was a multilateral activity, dominated by international non-governmental organizations and national governments alike. The early twenty-first century saw the emergence of influential philanthropic actors like the Gates Foundation. Now, we are entering a phase where private-sector AI companies are poised to become increasingly influential in this arena.
While open-source models and government-developed AI systems exist, the predominance of private-sector AI models, such as OpenAI’s ChatGPT and Google’s Gemini, raises critical questions about data governance in global health. Unlike existing cross-national commercial influences on health such as the fast food or tobacco industries, AI systems present more nuanced concerns. For instance, if private models become integrated into existing multilateral health initiatives, how can we ensure their compliance with global health objectives? How do we address potential conflicts of interest when companies hold influence over health data and decision-making?
Regional and national guidelines are emerging to govern this evolving landscape. The European Health Data Space, discussed at the World Health Assembly, offers one such example. This initiative aims to create a single data space across the twenty-seven EU member states, empowering patients to control their health data while establishing a framework for safe data reuse and AI deployment. It also includes provisions for rigorous evaluation of high-risk AI systems in healthcare.
Similarly, the African Union recently launched its Continental AI Strategy, with a stated aim “to harness artificial intelligence to meet Africa’s development aspirations and the well-being of its people, while promoting ethical use, minimizing potential risks, and leveraging opportunities.” Monitoring measures like this as they develop will be instructive for the future deployment of AI in global health initiatives.
Building Foundational Infrastructure
Another factor to consider is that advances in AI mean little for health systems at an insufficient level of maturity. Progress in AI depends heavily on a strong foundation of digital health architecture, which encompasses secure data management, interoperability between health information systems, and comprehensive digital strategies. While most countries have digital health strategies, their implementation varies widely, with progress in resource-limited settings often lagging. Several countries have neither sufficient health workers to regularly input data nor dependable electricity and Wi-Fi to support a transition from paper to digital records. The lack of foundational infrastructure presents a significant barrier to AI implementation.
Initiatives like the Precision Public Health Initiative, led by the Rockefeller Foundation in collaboration with the WHO, UNICEF, global health funding agencies, ministries of health, and technology companies, aim to strengthen AI use in low- and middle-income countries (LMICs). With initial funding of US$100 million, the initiative seeks to extend the use of AI and data science in LMICs, providing the latest technology to under-resourced parts of the world. Initiatives like this will need to concentrate resources on foundational health system strengthening functions such as the training and supportive supervision of staff and resource management.
Ethical Implications
As AI advances, ethical considerations must keep pace. These challenges can be broadly categorized into privacy and surveillance concerns, data misuse, algorithmic biases, and issues of transparency and liability. Recent cases highlight the urgency of addressing these matters proactively.
As the research report Ethics and Governance of Artificial Intelligence for Health: WHO Guidance explains, during the COVID-19 pandemic, China’s Alipay introduced a “Health Code” that used collected data to determine exposure risks. This system, which determined individuals’ mobility based on their assigned color codes, raised concerns about privacy, rights, and the potential for mass surveillance. Another case discussed in the WHO guidance report is Dinerstein v. Google, in which the University of Chicago shared patient records stripped of identifying information with Google to develop machine-learning tools for predicting medical events. A class action complaint was filed, alleging that records could be re-identified, threatening patient privacy.
Several other cases in the WHO guidance report highlight the critical issue of bias in AI systems. In Argentina, an AI system designed to predict adolescent pregnancy faced criticism when it was found to have flawed methodology and to violate the privacy of adolescents. Similarly, a study in the US revealed racial biases in an algorithm that resulted in Black patients receiving less medical attention than equally sick white patients.
Additionally, an AI technology designed to detect potentially cancerous skin lesions was trained primarily on data from lighter-toned individuals in Australia, Europe, and the US, highlighting its inadequacy for darker-skinned populations.
The “black box” nature of many AI algorithms also raises critical questions about informed consent and liability. If an AI system recommends a specific drug dosage, but the underlying algorithm is opaque to the physician, who bears responsibility for adverse outcomes?
A Case Study
To illustrate how the various considerations of AI in global health converge, the WHO’s Smart AI Resource Assistant for Health (S.A.R.A.H.) project provides a recent and relevant case study. Launched in April 2024, S.A.R.A.H. is a video-based generative AI assistant designed to address gaps in health information accessibility. Developed in partnership with Soul Machines Biological AI, this initiative represents, in the words of WHO Director-General Dr. Tedros Adhanom Ghebreyesus, “how artificial intelligence could be used in future to improve access to health information in a more interactive way.”
The potential for LLMs in health promotion must be viewed against the backdrop of the burden placed on health systems. For example, Sub-Saharan Africa and South Asia have an estimated 0.2 and 0.8 doctors per 1000 people, respectively, compared to 4.3 in the European Union and 3.4 in North America. A map of travel time to health facilities reveals that it’s not uncommon to spend a day traveling to see a doctor in several regions such as North Africa. Even when they can see a doctor, more than a billion people are driven into poverty each year because of exorbitant health care costs. In such contexts, LLMs can complement the health promotion efforts currently being provided by community health workers. They can also enhance supervision and training.
S.A.R.A.H. stands out for its efforts to tailor recommendations to local contexts. For example, it offers meal recommendations based on regional dietary habits. It also uses visual emotional cues to display empathy. Like its WhatsApp-based chatbot predecessor for sharing COVID-19 information, S.A.R.A.H.’s reach will probably expand through partnerships with telecommunications providers and social networks, supporting its broad dissemination.
However, S.A.R.A.H. faces some challenges that mirror broader issues in AI for global health. Users have noticed errors in the information S.A.R.A.H. has provided; it incorrectly stated, for example, that a drug for Alzheimer’s was still in clinical trials when the drug had been approved in 2023. This highlights the critical need for AI systems to keep pace with rapidly evolving medical knowledge.
While S.A.R.A.H. offers a wider range of languages than many existing tools (including French, Russian, English, Spanish, Hindi, Portuguese, Arabic, and Chinese), this still represents only a fraction of global languages, potentially limiting its reach. Also, the success of video-based tools like S.A.R.A.H. depends on robust digital infrastructure and access to smartphones with video capabilities, which are hardly universally available.
The processing of users’ video data also raises important privacy considerations. While these materials are not yet available, the WHO has committed to making the training materials and the evidence base for S.A.R.A.H. publicly accessible, aligning with its principles on LLM use. Transparency in how S.A.R.A.H. processes and uses data will be crucial to maintaining trust and offering insights for this emerging space.
Conclusion
As noted by WHO Director-General Dr. Tedros at the WHA, AI represents a transformative advancement in global health akin to past innovations such as the introduction of vaccines, penicillin, MRI machines, and human genome mapping, all of which revolutionized the field. As reported in the above-linked 2021 WHO report on AI in healthcare, the integration of AI into health systems presents immense potential with projections noting that the top ten AI applications in health could result in an estimated US$150 billion in savings by 2026.
While the potential of AI is undeniable, the critical question remains: can it fulfill the promise of improving health outcomes worldwide? This hinges on several factors, including building foundational infrastructure, addressing ethical considerations, and effectively governing the evolving landscape of actors, which are no small feats.
The call to action comes as the issue has intensified in recent years, affecting everyone from students to public figures like Taylor Swift and AOC.
Originally published by The 19th. Republished via their republish link.
“This is an issue that affects everybody — from celebrities to high school girls.”
That’s how Jen Klein, director of the White House Gender Policy Council, describes the pervasiveness of image-based sexual abuse, a problem that artificial intelligence (AI) has intensified in recent years, touching everyone from students to public figures like Taylor Swift and Rep. Alexandria Ocasio-Cortez.
In May, the Biden-Harris administration announced a call to action to curb such abuse, which disproportionately targets girls, women and LGBTQ+ people. Stopping these images, whether real or AI-generated, from being circulated and monetized requires not just the government to act, but tech companies to as well, according to the White House.
“We’re inviting technology companies and civil society to consider what steps they can take to prevent image-based sexual abuse, and there’s really a spectrum of actors who we hope will get involved in addressing the problem,” Klein said. “So that can be anything from the payment processors, to mobile app stores, to mobile app and operating system developers, cloud providers, search engines, etc. They all have a particular part of the sort of ecosystem in which this problem happens.”
Responding to the White House’s call to action, the Center for Democracy & Technology, the Cyber Civil Rights Initiative and the National Network to End Domestic Violence announced in June that they would form a working group to counteract the circulation and monetization of image-based sexual abuse. In late July, Meta, owner of Facebook and Instagram, removed 63,000 accounts linked to the “sextortion” of children and teens.
While older forms of this abuse include the leaking of intimate photos without the consent of all parties, the AI version includes face swapping, whereby the head of one individual is placed on another person’s naked body, Klein said. Both Swift and Ocasio-Cortez have been victims of this kind of sexual abuse. In March, Ocasio-Cortez introduced the Disrupt Explicit Forged Images and Non-Consensual Edits (DEFIANCE) Act of 2024. The legislation provides recourse for people, more than 90 percent of whom are women, who have had their likenesses used in intimate “digital forgery.” The Senate passed the DEFIANCE Act on July 23.
Such images have also garnered repeated headlines this year after spreading at schools. The White House’s appeal to tech companies follows the Biden-Harris administration’s recent updates to Title IX, the law that bars educational institutions that receive federal funds from engaging in sex discrimination. Under the new regulations that took effect Thursday, sex-based harassment includes sexually explicit deepfake images if they create a hostile school environment.
“We respectfully urge the Department of Education to issue guidance delineating Title IX procedures and protocols specifically tailored to addressing digital sexual harassment within educational institutions,” the letter states. “This guidance should provide clear direction on how schools can effectively handle cases of digital sexual harassment including support mechanisms for victims, investigation procedures, research and referrals, and prevention strategies.”
The Biden-Harris administration’s effort to prevent the proliferation of explicit deepfake images coincides with states taking action.
“There’s a patchwork of laws across the country, and there are 20 states that have passed laws penalizing the dissemination of nonconsensual AI-generated pornographic material,” Klein said. “But there’s a lot of work to be done, both at the state level and at the federal level to really make that work a whole quilt to continue the process.”
“It was pretty tricky because of the various First Amendment arguments that get raised,” he said. “The bill, to be honest, got watered down more than I wanted as it went through the process. But it has since been copied in other states, and then frankly, made stronger in other states.”
Berman decided to file legislation to prohibit child sex abuse deepfakes when the California District Attorneys Association informed his office that they’re increasingly catching people who are creating, disseminating or possessing such images.
“Their interpretation of California law currently is that it is not specifically illegal, because it doesn’t involve an image of an actual child — because AI takes thousands of images of real children and then spits out this artificial image,” Berman said. “So they said, ‘We need to close this loophole in California law and make sure that the law explicitly states that child sexual abuse material, even if it’s created by artificial intelligence, is illegal. I was shocked that people were even using AI to create this type of content, and then I found out just how pervasive it is, especially on the dark web. It’s terrifying.”
Possessing or distributing such images online may result in perpetrators sexually exploiting minors offline, making it all the more important to address AI-generated versions of this content before it spirals out of control and becomes a huge problem for the nation’s young people, Berman said.
Multiple schools in California have been rocked by deepfake scandals, often related to images created by students of their peers. In March, a Calabasas High School student accused her onetime friend of disseminating actual and AI-generated nudes of her to their peers. That same month, a Beverly Hills middle school expelled five students for allegedly circulating AI-generated nudes of their classmates.
Such incidents are one reason Berman believes students need to be taught to use AI responsibly. “AB 2876 will equip students with the skills and the training that they need to both harness the benefits of AI, but also to mitigate the dangers and the ethical considerations of using artificial intelligence,” he said.
The legislation has been ordered to a third reading, the bill’s final phase before it leaves the state assembly and moves to the senate. Meanwhile, his bill to prohibit child sex abuse deepfakes, AB 1831, has been referred to the suspense file, meaning that the bill’s potential fiscal impacts to the state are being reviewed. The legislation would take effect January 1 if enacted.
“It’d be great if Congress can pass some federal standards on this,” Berman said. “It’s always an ideal when it comes to legislation that really applies to every state and to kids in every state.”
Pending national legislation addressing the issue includes The SHIELD Act and The Kids Online Safety and Privacy Act (KOSA), which the Senate passed July 30, although it still awaits a vote in the House of Representatives. The former would make the non-consensual sharing of intimate images a federal offense, while the latter would require social media companies to take steps to prevent children and teens from being sexually exploited online, among other measures. KOSA, however, has sparked fears that lawmakers could use it to censor content they dislike, particularly LGBTQ+ content, under the guise of protecting children. Civil liberties groups like the ACLU said that the bill raises privacy concerns, may limit youth’s access to important online resources and could silence needed conversations.
Evan Greer, director at Fight for the Future, a nonprofit advocacy group focused on digital rights, objected to KOSA’s Senate passage in a statement. “We need legislation that addresses the harm of Big Tech and still lets young people fight for the type of world that they actually want to grow up in,” she said.
AI-generated image-based sexual abuse also affects college students, according to Tracey Vitchers, executive director of It’s On Us, a nonprofit that addresses college sexual assault. She called it an emerging issue on college campuses.
“It really started with the emergence of nonconsensual image-sharing involving an individual sharing a private photo with someone that they thought they could trust,” she said. “We are now starting to see this challenge come forward with AI and deepfakes, and unfortunately, many schools are not equipped to investigate gender-based harassment and violence that occurs as a result of deepfakes.”
Vitchers appreciates that the new Title IX regulations touch on the issue, but said that colleges need more guidance from the Department of Education about how to respond to these incidents, and students need more prevention education.
“It’s something that we have begun discussing with some of our partners, particularly those in the online dating space,” Vitchers said. “We are hearing that fear, among particularly young women on campus, about someone who can just take a picture of you from Instagram and use AI to superimpose it onto porn. Then it gets circulated and it feels impossible to get it removed from the internet.”
Some tech companies have already offered their support to the White House’s effort to stop image-based sexual abuse, Klein said, but she would like to hear from others. Although state and national lawmakers are working to enact legislation and regulations, Klein said that the Biden-Harris administration is calling on tech companies to intervene because they can take action now.
“Given the scale that image-based abuse has been rapidly proliferating with the advent of generative AI, we need to do this while we continue to work toward longer-term solutions,” she said.
New feature spotted in brightest gamma-ray burst of all time
July 28, 2024 Evrim Yazgin
NASA’s Fermi Telescope has revealed new details about the brightest gamma-ray burst of all time, which may help explain these extreme and mysterious cosmic events.
Gamma-ray bursts (GRBs) usually last less than a second. They originate from the dense remains of a dead giant star’s core, called a neutron star. But what causes neutron stars to release huge amounts of energy in the form of gamma radiation is still a mystery.
A jet of particles moving at nearly light speed emerges from a massive star in this artist’s concept. Credit: NASA’s Goddard Space Flight Center Conceptual Image Lab.
In October 2022, astronomers detected the brightest gamma-ray burst ever seen – GRB 221009A. It came from a supernova about 2.4 billion light years away. The event had an intensity at least 10 times greater than any other GRB detected. It was dubbed the BOAT, for brightest of all time.
Now, analysis of the data from that event has revealed the first emission line which can be confidently identified in 50 years of studying GRBs.
Emission lines are created when matter interacts with light. Energy from the light is absorbed and reemitted in ways characteristic of the chemical makeup of the matter interacting with it.
When the light reaches Earth and is spread out like a rainbow in a spectrum, the absorption and emission lines appear. Absorption lines appear as dimmer or even black lines in the spectrum, whereas emission lines are brighter features.
At higher energies, these features in the spectrum can reveal processes between subatomic particles, such as matter-antimatter annihilation, which can produce gamma rays.
“While some previous studies have reported possible evidence for absorption and emission features in other GRBs, subsequent scrutiny revealed that all of these could just be statistical fluctuations,” says coauthor Om Sharan Salafia at the Italian National Institute of Astrophysics Brera Observatory in Milan. “What we see in the BOAT is different.”
The emission line appeared almost 5 minutes after the burst was detected. It lasted about 40 seconds.
It peaked at 12 million electron volts of energy – millions of times more energetic than light in the visible spectrum.
The astronomers believe the emission line was caused by the annihilation of electrons and their antimatter counterparts, positrons. If their interpretation is correct, the particles must have been moving toward Earth at 99.9% of the speed of light.
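As a rough, back-of-the-envelope illustration of why annihilation implies such extreme speeds (this simplified head-on calculation is ours, not the study’s): electron-positron annihilation at rest produces photons of 0.511 million electron volts, so boosting that line to the observed 12 million electron volts requires a relativistic Doppler factor of roughly 23. Under the simplest assumption of material moving straight toward the observer, that corresponds to better than 99.5% of light speed; the study’s more detailed jet modelling is what underlies the quoted 99.9% figure.

```python
import math

# Rest-frame energy of a photon from electron-positron annihilation (MeV)
E_REST = 0.511
# Observed peak energy of the emission line in GRB 221009A (MeV)
E_OBS = 12.0

# Doppler factor needed to blueshift the annihilation line to the observed energy
doppler = E_OBS / E_REST

# For a source moving directly toward the observer (simplified geometry):
#   doppler = sqrt((1 + beta) / (1 - beta))
# solving for beta (speed as a fraction of c):
beta = (doppler**2 - 1) / (doppler**2 + 1)

print(f"Required Doppler factor: {doppler:.1f}")
print(f"Implied speed: {beta:.4f} c")
```

This head-on estimate gives about 0.996 c; real GRB jets have angular structure, so the inferred speed depends on the viewing geometry assumed in the full analysis.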
“After decades of studying these incredible cosmic explosions, we still don’t understand the details of how these jets work,” says Elizabeth Hays, Fermi project scientist at NASA’s Goddard Space Flight Center in the US. “Finding clues like this remarkable emission line will help scientists investigate this extreme environment more deeply.”