


posted by Fnord666 on Thursday April 10, @02:18AM   Printer-friendly
from the bark-at-the-moon dept.

The dire wolf has been extinct for over 10,000 years. These two wolves were brought back from extinction:

The dire wolf, an animal that has been extinct for over 10,000 years, has nearly come back: scientists at Colossal Biosciences were able to edit the DNA of a modern wolf so that it has the appearance and features of the dire wolf, a type of wolf made famous by "Game of Thrones."

Colossal Biosciences posted to X with a video clip of two small wolf cubs barking, "You're hearing the first howl of a dire wolf in over 10,000 years. Meet Romulus and Remus—the world's first de-extinct animals, born on October 1, 2024."

[...] The Dallas-based company, which has also taken on challenges to bring back the dodo bird and the woolly mammoth, was able to obtain DNA from fossils of dire wolves in 2021 and then edit the DNA of grey wolves to weave the dire wolf's key features into the grey wolf cubs. The edited embryos were placed into a surrogate wolf mother. Three wolves were born as a result, two male and one female, the New York Times reported.

From the AP News report:

Colossal scientists learned about specific traits that dire wolves possessed by examining ancient DNA from fossils. The researchers studied a 13,000 year-old dire wolf tooth unearthed in Ohio and a 72,000 year-old skull fragment found in Idaho, both part of natural history museum collections.

Then the scientists took blood cells from a living gray wolf and used CRISPR to genetically modify them in 20 different sites, said Colossal's chief scientist Beth Shapiro. They transferred that genetic material to an egg cell from a domestic dog. When ready, embryos were transferred to surrogates, also domestic dogs, and 62 days later the genetically engineered pups were born.


Original Submission

posted by Fnord666 on Wednesday April 09, @09:33PM   Printer-friendly
from the retro dept.

https://arstechnica.com/gadgets/2025/04/fire-up-your-compaq-deskpro-freedos-1-4-is-the-first-stable-update-since-2022/

We're used to updating Windows, macOS, and Linux systems at least once a month (and usually more), but people with ancient DOS-based PCs still get to join in the fun every once in a while. Over the weekend, the team that maintains FreeDOS officially released version 1.4 of the operating system, containing a list of fixes and updates that have been in the works since the last time a stable update was released in 2022.
[...]
The release has "a focus on stability" and includes an updated installer, new versions of common tools like fdisk and format, and the edlin text editor.
[...]
Hall talked with Ars about several of these changes when we interviewed him about FreeDOS in 2024. The team issued the first release candidate for FreeDOS 1.4 back in January.
[...]
The standard install image includes all the files and utilities you need for a working FreeDOS install, and a separate "BonusCD" download is also available for those who want development tools, the OpenGEM graphical interface, and other tools.


Original Submission

posted by Fnord666 on Wednesday April 09, @04:48PM   Printer-friendly

People think the em dash is a dead giveaway you used AI – are they right?

ChatGPT is rapidly changing how we write, how we work – and maybe even how we think. So it makes sense that it stirs up strong emotions and triggers an instinct to figure out what's real and what's not.

But on LinkedIn, the hunt for AI-generated content has gone full Voight-Kampff. According to some, there's now a surefire way to spot ChatGPT use: the em dash.

Yes, the punctuation mark officially defined by the width of one "em." A favorite of James Joyce, Stephen King, and Emily Dickinson. A piece of punctuation that's been around since at least the 1830s. So why is it suddenly suspicious? Is it really an AI tell or punctuation paranoia?

Rebecca Harper, Head of Content Marketing at auditing compliance platform ISMS.online, doesn't think so: "I find the idea that it's some kind of AI tell ridiculous. If we start policing good grammar out of fear of AI, we're only making human writing worse!"

She's right. The em dash isn't some fringe punctuation mark. Sure, it's used less often than its siblings – the en dash and the humble hyphen – and it's more common in the US than the UK. But that doesn't make it automatically suspicious.

Robert Andrews, a Senior Editor, explains that this is a difference in style rather than a smoking gun: "It's not just a marker of AI, but of US English and AP Style. It's quite alien to UK journalism training and style, at least my own, albeit long ago. But increasingly encountered in AP Style environments - (or –, or —) unsurprising that this would flow into LLMs."

[...] Still, because it's slightly less common in some circles, people have latched onto it as a tell. Chris McNabb, Chief Technology Officer at eGroup Communications, makes this case: "I think it's a strong indicator, especially when you see it being used often by one person. Typically most people aren't going to long press the dash key to even use the en dash BUT AI such as ChatGPT uses it by default in a lot of cases. So yes when you do see an em dash particularly more than one in a message it's a pretty safe bet for a majority of posts."

So now, some people are actively scrubbing their em dashes to avoid suspicion. Editors, marketers, and content folks are switching them out for commas or full stops just to avoid being mistaken for a ChatGPT user.
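
If you want to see how crude that heuristic really is, the whole "test" amounts to counting a Unicode code point. A toy illustration in Python (the em dash is U+2014; the sample post is invented):

```python
import re

def dash_census(text: str) -> dict:
    """Count em dashes (U+2014), en dashes (U+2013), and hyphens."""
    return {
        "em": text.count("\u2014"),
        "en": text.count("\u2013"),
        "hyphen": len(re.findall(r"(?<=\w)-(?=\w)", text)),
    }

post = "Great results\u2014the team shipped early\u2014again!"
print(dash_census(post))  # {'em': 2, 'en': 0, 'hyphen': 0}
```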

[...] Maybe we'll look back on this moment and laugh. Or cringe. Maybe the AI bubble will burst, and human-made content will feel valuable again. Or maybe AI will become so deeply embedded, so seamless, that trying to tell the difference will feel quaint.

Until then, let's stop blaming punctuation. Because what we're really afraid of isn't the em dash. It's the slow, creeping erosion of what's real. And honestly? It's painful to live in fear. Isn't it?

I find this LinkedIn-based paranoia all very amusing, as I have been using all of these punctuations for years – nay, decades – in my technical writing. I seriously doubt that this means I am a machine—or does it??


Original Submission

posted by hubie on Wednesday April 09, @12:11PM   Printer-friendly
from the the-information-manager-from-hell dept.

Author and developer Scott Chacon reflects that, as of April 7th, it has been twenty years since Linus Torvalds made the first commit to Git, the free and open source distributed version control system he was building at the time. Linus has long since passed the baton onward. As a developer tool, Git is known for its quirks and idiosyncrasies as much as for its ability to handle everything from small to very large projects with speed and efficiency.

Over these last 20 years, Git went from a small, simple, personal project to the most massively dominant version control system ever built.

I have personally had a hell of a ride on this particular software roller coaster.

I started using Git for something you might not imagine it was intended for, only a few months after its first commit. I then went on to found GitHub, write arguably the most widely read book on Git, build the official website of the project, start the annual developer conference, etc. This little project has changed the world of software development, but more personally, it has massively changed the course of my life.

I thought it would be fun today, as the Git project rolls into its third decade, to remember the earliest days of Git and explain a bit why I find this project so endlessly fascinating.

Although Git is often used as part of a set of services like those provided by Codeberg, GitLab, and others more or less infamous, it is perfectly easy to run it in-house. Either way, it has become virtually synonymous with version control. Over the years, Git has gradually pushed aside its predecessors and even many (if not all) of its contemporary competitors.
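
Part of what keeps Git fascinating is how small its core model is: every object is stored under the SHA-1 of a short header plus its contents. A minimal sketch, not from Chacon's post, of how Git names a blob:

```python
import hashlib

def git_blob_sha1(data: bytes) -> str:
    """Compute the object ID Git assigns to a blob.

    Git hashes a short header ("blob <size>" plus a NUL byte) followed
    by the raw content; `git hash-object` does exactly this for a file.
    """
    header = f"blob {len(data)}\0".encode()
    return hashlib.sha1(header + data).hexdigest()

# Matches `git hash-object` for a file containing "hello\n":
# ce013625030ba8dba906f756967f9e9ca394464a
print(git_blob_sha1(b"hello\n"))
```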

Previously:
(2024) Beyond Git: How Version Control Systems Are Evolving For Devops
(2022) Give Up GitHub: The Time Has Come!
(2017) Git 2.13 Released


Original Submission

posted by hubie on Wednesday April 09, @07:28AM   Printer-friendly
from the we-know-what-you-look-like dept.

The following are some of the top facial recognition companies packaging technology to simplify identity verification for businesses, consumers, and government: Cognitec, Sensory, iProov, HyperVerge, Clarifai, and Amazon Rekognition, among many others. They use traditional algorithms, deep learning, optical and infrared sensors, 3D scans, and other technologies, singly or in hybrid combinations of the many approaches.
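
None of these vendors publish their matching pipelines here, but a common deep-learning pattern is to reduce each face to a fixed-length embedding and compare embeddings by cosine similarity. A toy sketch; the dimensionality, threshold, and data are illustrative, not any vendor's actual API:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face-embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def same_person(probe: np.ndarray, reference: np.ndarray,
                threshold: float = 0.6) -> bool:
    """Declare a match when embeddings are close enough; real systems
    tune the threshold against false-accept/false-reject targets."""
    return cosine_similarity(probe, reference) >= threshold

# Hypothetical 512-dimensional embeddings from some face model.
rng = np.random.default_rng(0)
enrolled = rng.normal(size=512)
probe = enrolled + rng.normal(scale=0.1, size=512)  # same face, new photo
print(same_person(probe, enrolled))  # True
```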

Mother Jones is known for long political stories. This one looks at a successful facial recognition company, Clearview, how it got its technology widely deployed (and highly remunerated) in a short time, and the underlying political ideology that drove the developers in their mission:

"...an interesting idea...The United States of America was founded on the idea that all men are created equal. And Curtis simply asked a question, as I remember it: 'What if they're not? What do you do?...How do you govern that?'...That's what we talked about all the time."

Clearview is riding a wave of demand in the sea of identity tracking technology, and they don't look likely to wipe out anytime soon:

Since Clearview's existence first came to light in 2020, the secretive company has attracted outsize controversy for its dystopian privacy implications. Corporations like Macy's allegedly used Clearview on shoppers, according to legal records; law enforcement has deployed it against activists and protesters; and multiple government investigations have found federal agencies' use of the product failed to comply with privacy requirements. Many local and state law enforcement agencies now rely on Clearview as a tool in everyday policing, with almost no transparency about how they use the tech. "What Clearview does is mass surveillance, and it is illegal," the privacy commissioner of Canada said in 2021. In 2022, the ACLU settled a lawsuit with Clearview for allegedly violating an Illinois state law that prohibits unauthorized biometric harvesting. Data protection authorities in France, Greece, Italy, and the Netherlands have also ruled that the company's data collection practices are illegal. To date, they have fined Clearview around $100 million.

It's amazing what impact a small group of technology oriented people can have in today's society.


Original Submission

posted by janrinok on Wednesday April 09, @02:42AM   Printer-friendly

https://spectrum.ieee.org/sound-waves

A group of international researchers have developed a way to use sound to generate different types of wave patterns on the surface of water and use those patterns to precisely control and steer floating objects. Though still in the early stages of developing this technique, the scientists say further research could lead to generating wave patterns to help corral and clean up oil spills and other pollutants. Further out, at the micrometer scale, light waves based on the research could be used to manipulate cells for biological applications; and by scaling up the research to generate water waves hundreds of times larger using mechanical means, optimally designed water wave patterns might be used to generate electricity.

The team conducted laboratory experiments where they generated topological wave structures such as vortices, where the water swirls around a center point; Möbius strips that cause the water to twist and loop around in a circle; and skyrmions, where the waves twist and turn in 3D space.

"We were able to use these patterns to control the movement of objects as small as a grain of rice to as large as a ping-pong ball, which has never been done before," says Yijie Shen, an assistant professor at Nanyang Technology University in Singapore who co-led the research. "Some patterns can act like invisible tweezers to hold an object in place on the water, while other patterns caused the objects to move along circular or spiral paths."

Commenting on the findings, Usama Kadri, a reader in applied and computational mathematics at Cardiff University in Wales, noted that "the research is conceptually innovative and represents a significant development in using sound to generate water waves." Kadri, who is researching the effects of acoustic-gravity waves (sound waves influenced by gravity and buoyancy), added, "The findings can be a bridge between disciplines such as fluid dynamics, wave physics, and topological field theory, and open up a new way for remote manipulation and trapping of particles of different sizes."

The lab set-up consisted of carefully designed 3D-printed plastic structures based on computer simulations, including a hexagonal structure and a ring-shaped structure, each partially submerged in a tank of water. Rubber tubing from individual off-the-shelf speakers is attached to precisely sited nozzles protruding from the tops of the structures and is used to deliver a continuous low-frequency 6.8 hertz sound to the hexagonal device, or a 9 hertz sound to the ring device. The sounds cause the surface of the water to oscillate and create the desired wave patterns. A particular sound's amplitude, phase, and frequency can be adjusted using a laptop computer, so that when the waves meet and combine in the tank, they create the complex patterns that have previously been worked out using computer simulations. The findings were published in February in Nature.
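
As a loose illustration of the principle, not the authors' model: superposing circular waves from several sources whose phases wind around a ring yields a vortex-like interference pattern at the centre:

```python
import numpy as np

# Six point sources on a unit hexagon; phases advance by 60 degrees
# around the ring, which winds a vortex into the combined field.
angles = np.linspace(0, 2 * np.pi, 6, endpoint=False)
k = 2 * np.pi / 0.05   # wavenumber for an assumed 5 cm wavelength

x = np.linspace(-1.5, 1.5, 300)
X, Y = np.meshgrid(x, x)
field = np.zeros_like(X)
for a in angles:
    r = np.hypot(X - np.cos(a), Y - np.sin(a))
    field += np.cos(k * r + a)   # equal-amplitude circular wave, phase a

# `field` approximates the instantaneous surface height; its centre
# carries a phase singularity, the vortex that can trap a floater.
print(field.shape, float(field.min()), float(field.max()))
```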

The wave patterns apply forces similar to those seen in optical and acoustic systems, including gradient forces that change in intensity, and which can attract objects towards the strongest part of the wave, like leaves moving to the center of a whirlpool; and radiation pressure that pushes objects in the same direction the wave is moving.

"The wave patterns we generated are topological and stable, so they keep their shape even when there is some disturbance in the water," says Shen. "This is something we want to study further to better understand what's happening."


Original Submission

posted by janrinok on Tuesday April 08, @09:55PM   Printer-friendly

No one can seem to agree on what an AI agent is:

Silicon Valley is bullish on AI agents. OpenAI CEO Sam Altman said agents will "join the workforce" this year. Microsoft CEO Satya Nadella predicted that agents will replace certain knowledge work. Salesforce CEO Marc Benioff said that Salesforce's goal is to be "the number one provider of digital labor in the world" via the company's various "agentic" services.

But no one can seem to agree on what an AI agent is, exactly.

In the last few years, the tech industry has boldly proclaimed that AI "agents" — the latest buzzword — are going to change everything. In the same way that AI chatbots like OpenAI's ChatGPT gave us new ways to surface information, agents will fundamentally change how we approach work, claim CEOs like Altman and Nadella.

That may be true. But it also depends on how one defines "agents," which is no easy task. Much like other AI-related jargon (e.g. "multimodal," "AGI," and "AI" itself), the terms "agent" and "agentic" are becoming diluted to the point of meaninglessness.

That threatens to leave OpenAI, Microsoft, Salesforce, Amazon, Google, and the countless other companies building entire product lineups around agents in an awkward place. An agent from Amazon isn't the same as an agent from Google or any other vendor, and that's leading to confusion — and customer frustration.

[...] So why the chaos?

Well, agents — like AI — are a nebulous thing, and they're constantly evolving. OpenAI, Google, and Perplexity have just started shipping what they consider to be their first agents — OpenAI's Operator, Google's Project Mariner, and Perplexity's shopping agent — and their capabilities are all over the map.

Rich Villars, GVP of worldwide research at IDC, noted that tech companies "have a long history" of not rigidly adhering to technical definitions.

"They care more about what they are trying to accomplish" on a technical level, Villars told TechCrunch, "especially in fast-evolving markets."

But marketing is also to blame in large part, according to Andrew Ng, the founder of AI learning platform DeepLearning.ai.

"The concepts of AI 'agents' and 'agentic' workflows used to have a technical meaning," Ng said in a recent interview, "but about a year ago, marketers and a few big companies got a hold of them."

[...] "Without a standardized definition, at least within an organization, it becomes challenging to benchmark performance and ensure consistent outcomes," Rowan said. "This can result in varied interpretations of what AI agents should deliver, potentially complicating project goals and results. Ultimately, while the flexibility can drive creative solutions, a more standardized understanding would help enterprises better navigate the AI agent landscape and maximize their investments."

Unfortunately, if the unraveling of the term "AI" is any indication, it seems unlikely the industry will coalesce around one definition of "agent" anytime soon — if ever.


Original Submission

posted by janrinok on Tuesday April 08, @05:12PM   Printer-friendly
from the marching-morons dept.

The Overpopulation Project has an English translation of Frank Götmark's short essay exploring the idea that Homo sapiens is an invasive species. The essay was originally published on March 30th in Svenska Dagbladet and has been very slightly modified.

An invasive species can be defined as an alien, non-native species that spreads and causes various forms of damage. Such species are desirable to regulate and, in the best case, eliminate from a country. But compared to our population growth they are a minor problem, at least in Sweden and many European countries. In North America and Australia, they are a larger problem. But again, they cause a lot less damage than Homo sapiens, who is in any case the cause of their spread.

Invasive species tend to appear near buildings and infrastructure; for example, on roadsides and other environments that are easily colonized, or in the sea via ballast in ships. It is often difficult to draw boundaries in time and space for invasive species. For example, in Sweden several species came in via seeds in agriculture during the 19th century and became common, such as certain weeds.

The idea has been explored before, for example back in 2015 by Scientific American. It's also relevant to note that the global population might be underestimated substantially.

Previously:
(2019) July 11 is World Population Day
(2016) Bioethicist: Consider Having Fewer Children in the Age of Climate Change
(2015) Poll Shows Giant Gap Between what US Public and Scientists Think
(2014) The Climate-Change Solution No One Will Talk About


Original Submission

posted by janrinok on Tuesday April 08, @12:23PM   Printer-friendly

NASA's Webb Exposes Complex Atmosphere of Starless Super-Jupiter - NASA Science:

An international team of researchers has discovered that previously observed variations in brightness of a free-floating planetary-mass object known as SIMP 0136 must be the result of a complex combination of atmospheric factors, and cannot be explained by clouds alone.

Using NASA's James Webb Space Telescope to monitor a broad spectrum of infrared light emitted over two full rotation periods by SIMP 0136, the team was able to detect variations in cloud layers, temperature, and carbon chemistry that were previously hidden from view.

The results provide crucial insight into the three-dimensional complexity of gas giant atmospheres within and beyond our solar system. Detailed characterization of objects like these is essential preparation for direct imaging of exoplanets, planets outside our solar system, with NASA's Nancy Grace Roman Space Telescope, which is scheduled to begin operations in 2027.

SIMP 0136 is a rapidly rotating, free-floating object roughly 13 times the mass of Jupiter, located in the Milky Way just 20 light-years from Earth. Although it is not classified as a gas giant exoplanet — it doesn't orbit a star and may instead be a brown dwarf — SIMP 0136 is an ideal target for exo-meteorology: It is the brightest object of its kind in the northern sky. Because it is isolated, it can be observed with no fear of light contamination or variability caused by a host star. And its short rotation period of just 2.4 hours makes it possible to survey very efficiently.

Prior to the Webb observations, SIMP 0136 had been studied extensively using ground-based observatories and NASA's Hubble and Spitzer space telescopes.

"We already knew that it varies in brightness, and we were confident that there are patchy cloud layers that rotate in and out of view and evolve over time," explained Allison McCarthy, doctoral student at Boston University and lead author on a study published today in The Astrophysical Journal Letters. "We also thought there could be temperature variations, chemical reactions, and possibly some effects of auroral activity affecting the brightness, but we weren't sure."

To figure it out, the team needed Webb's ability to measure very precise changes in brightness over a broad range of wavelengths.

Using NIRSpec (Near-Infrared Spectrograph), Webb captured thousands of individual 0.6- to 5.3-micron spectra — one every 1.8 seconds over more than three hours as the object completed one full rotation. This was immediately followed by an observation with MIRI (Mid-Infrared Instrument), which collected hundreds of spectroscopic measurements of 5- to 14-micron light — one every 19.2 seconds, over another rotation.

The result was hundreds of detailed light curves, each showing the change in brightness of a very precise wavelength (color) as different sides of the object rotated into view.

"To see the full spectrum of this object change over the course of minutes was incredible," said principal investigator Johanna Vos, from Trinity College Dublin. "Until now, we only had a little slice of the near-infrared spectrum from Hubble, and a few brightness measurements from Spitzer."

The team noticed almost immediately that there were several distinct light-curve shapes. At any given time, some wavelengths were growing brighter, while others were becoming dimmer or not changing much at all. A number of different factors must be affecting the brightness variations.

"Imagine watching Earth from far away. If you were to look at each color separately, you would see different patterns that tell you something about its surface and atmosphere, even if you couldn't make out the individual features," explained co-author Philip Muirhead, also from Boston University. "Blue would increase as oceans rotate into view. Changes in brown and green would tell you something about soil and vegetation."

To figure out what could be causing the variability on SIMP 0136, the team used atmospheric models to show where in the atmosphere each wavelength of light was originating.

"Different wavelengths provide information about different depths in the atmosphere," explained McCarthy. "We started to realize that the wavelengths that had the most similar light-curve shapes also probed the same depths, which reinforced this idea that they must be caused by the same mechanism."

One group of wavelengths, for example, originates deep in the atmosphere where there could be patchy clouds made of iron particles. A second group comes from higher clouds thought to be made of tiny grains of silicate minerals. The variations in both of these light curves are related to patchiness of the cloud layers.

A third group of wavelengths originates at very high altitude, far above the clouds, and seems to track temperature. Bright "hot spots" could be related to auroras that were previously detected at radio wavelengths, or to upwelling of hot gas from deeper in the atmosphere.

Some of the light curves cannot be explained by either clouds or temperature, but instead show variations related to atmospheric carbon chemistry. There could be pockets of carbon monoxide and carbon dioxide rotating in and out of view, or chemical reactions causing the atmosphere to change over time.

"We haven't really figured out the chemistry part of the puzzle yet," said Vos. "But these results are really exciting because they are showing us that the abundances of molecules like methane and carbon dioxide could change from place to place and over time. If we are looking at an exoplanet and can get only one measurement, we need to consider that it might not be representative of the entire planet."

This research was conducted as part of Webb's General Observer Program 3548.



Original Submission

posted by janrinok on Tuesday April 08, @07:42AM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

[CFD: Computational Fluid Dynamics]

CFD simulation is cut down from almost 40 hours to less than two using 1,024 Instinct MI250X accelerators paired with Epyc CPUs.

AMD processors were instrumental in achieving a new world record during a recent Ansys Fluent computational fluid dynamics (CFD) simulation run on the Frontier supercomputer at the Oak Ridge National Laboratory (ORNL). According to a press release by Ansys, it ran a 2.2-billion-cell axial turbine simulation for Baker Hughes, an energy technology company, testing its next-generation gas turbines aimed at increasing efficiency. The simulation previously took 38.5 hours to complete on 3,700 CPU cores. By using 1,024 AMD Instinct MI250X accelerators paired with AMD EPYC CPUs in Frontier, the simulation time was slashed to 1.5 hours. This is more than 25 times faster, allowing the company to see the impact of the changes it makes on designs much more quickly.

Frontier was once the fastest supercomputer in the world, and it was also the first one to break into exascale performance. It replaced the Summit supercomputer, which was decommissioned in November 2024. However, the El Capitan supercomputer, located at the Lawrence Livermore National Laboratory, broke Frontier’s record at around the same time. Both Frontier and El Capitan are powered by AMD GPUs, with the former boasting 9,408 AMD EPYC processors and 37,632 AMD Instinct MI250X accelerators. On the other hand, the latter uses 44,544 AMD Instinct MI300A accelerators.

Given those numbers, the Ansys Fluent CFD simulator apparently only used a fraction of the power available on Frontier. That means it has the potential to run even faster if it can utilize all the available accelerators on the supercomputer. It also shows that, despite Nvidia’s market dominance in AI GPUs, AMD remains a formidable competitor, with its CPUs and GPUs serving as the brains of some of the fastest supercomputers on Earth.
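
The quoted figures are easy to sanity-check. A quick back-of-the-envelope using only the numbers reported above:

```python
# Quick sanity check on the reported figures.
hours_before, hours_after = 38.5, 1.5
print(f"speedup: {hours_before / hours_after:.1f}x")  # 25.7x, "more than 25 times"

# Share of Frontier's 37,632 MI250X accelerators the run used.
print(f"accelerators used: {1024 / 37632:.1%}")       # about 2.7%
```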

“By scaling high-fidelity CFD simulation software to unprecedented levels with the power of AMD Instinct GPUs, this collaboration demonstrates how cutting-edge supercomputing can solve some of the toughest engineering challenges, enabling breakthroughs in efficiency, sustainability, and innovation,” said Brad McCredie, AMD Senior Vice President for Data Center Engineering.

Even though AMD can deliver top-tier performance at a much cheaper price than Nvidia, many AI data centers prefer Team Green because of software issues with AMD’s hardware.

One high-profile example was Tiny Corp's TinyBox system, which had instability problems with its AMD Radeon RX 7900 XTX graphics cards. The problem was so bad that Dr. Lisa Su had to step in to fix the issues. And even though it was purportedly fixed, the company still released two versions of the TinyBox AI accelerator — one powered by AMD and the other by Nvidia. Tiny Corp also recommended the more expensive Team Green version, with its six RTX 4090 GPUs, because of its driver quality.

If Team Red can fix the software support on its great hardware, then it could likely get more customers for its chips and get a more even footing with Nvidia in the AI GPU market.


Original Submission

posted by hubie on Tuesday April 08, @02:56AM   Printer-friendly
from the having-rings-is-cool dept.

Earth Had Saturn-Like Rings 466 Million Years Ago, New Study Suggests

The temporary structure likely consisted of debris from a broken-up asteroid:

Earth may have sported a Saturn-like ring system 466 million years ago, after it captured and wrecked a passing asteroid, a new study suggests.

The debris ring, which likely lasted tens of millions of years, may have led to global cooling and even contributed to the coldest period on Earth in the past 500 million years.

That's according to a fresh analysis of 21 crater sites around the world that researchers suspect were all created by falling debris from a large asteroid between 488 million and 443 million years ago, an era in Earth's history known as the Ordovician during which our planet witnessed dramatically increased asteroid impacts.

A team led by Andy Tomkins, a professor of planetary science at Monash University in Australia, used computer models of how our planet's tectonic plates moved in the past to map out where the craters were when they first formed over 400 million years ago. The team found that all the craters had formed on continents that floated within 30 degrees of the equator, suggesting they were created by the falling debris of a single large asteroid that broke up after a near-miss with Earth.

"Under normal circumstances, asteroids hitting Earth can hit at any latitude, at random, as we see in craters on the moon, Mars and Mercury," Tomkins wrote in The Conversation. "So it's extremely unlikely that all 21 craters from this period would form close to the equator if they were unrelated to one another."

The chain of crater locations all hugging the equator is consistent with a debris ring orbiting Earth, scientists say. That's because such rings typically form above planets' equators, as occurs with those circling Saturn, Jupiter, Uranus and Neptune. The chances that these impact sites were created by unrelated, random asteroid strikes are about 1 in 25 million, the new study found.

[...] "Over millions of years, material from this ring gradually fell to Earth, creating the spike in meteorite impacts observed in the geological record," Tomkins added in a university statement. "We also see that layers in sedimentary rocks from this period contain extraordinary amounts of meteorite debris."

The team found that this debris, which represented a specific type of meteorite and was found to be abundant in limestone deposits across Europe, Russia and China, had been exposed to a lot less space radiation than meteorites that fall today. Those deposits also reveal signatures of multiple tsunamis during the Ordovician period, all of which can be best explained by a large, passing asteroid capture-and-break-up scenario, the researchers argue.

The new study is a "new and creative idea that explains some observations," Birger Schmitz of Lund University in Sweden told New Scientist. "But the data are not yet sufficient to say that the Earth indeed had rings."

Searching for a common signature in specific asteroid grains across the newly studied impact craters would help test the hypothesis, Schmitz added.

Earth may have had a ring system 466 million years ago:

In a discovery that challenges our understanding of Earth's ancient history, researchers have found evidence suggesting that Earth may have had a ring system, formed around 466 million years ago, at the beginning of a period of unusually intense meteorite bombardment known as the Ordovician impact spike.

This surprising hypothesis, published today in Earth and Planetary Science Letters, stems from plate tectonic reconstructions for the Ordovician period noting the positions of 21 asteroid impact craters. All these craters are located within 30 degrees of the equator, despite over 70 per cent of Earth's continental crust being outside this region, an anomaly that conventional theories cannot explain.

The research team believes this localised impact pattern was produced after a large asteroid had a close encounter with Earth. As the asteroid passed within Earth's Roche limit, it broke apart due to tidal forces, forming a debris ring around the planet—similar to the rings seen around Saturn and other gas giants today.
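
For scale, the Roche limit is straightforward to estimate with the standard rigid-body formula; the asteroid density below is an assumption, not a figure from the paper:

```python
# Rigid-body Roche limit: d = R * (2 * rho_planet / rho_body)^(1/3).
R_EARTH = 6.371e6        # m
RHO_EARTH = 5510         # kg/m^3, Earth's mean density
RHO_ASTEROID = 2500      # kg/m^3, assumed rocky asteroid

d = R_EARTH * (2 * RHO_EARTH / RHO_ASTEROID) ** (1 / 3)
print(f"{d / 1e3:,.0f} km, about {d / R_EARTH:.1f} Earth radii")
# ~10,400 km, about 1.6 Earth radii; a fluid body breaks up farther out.
```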

[...] "What makes this finding even more intriguing is the potential climate implications of such a ring system," he said.

The researchers speculate that the ring could have cast a shadow on Earth, blocking sunlight and contributing to a significant global cooling event known as the Hirnantian Icehouse.

This period, which occurred near the end of the Ordovician, is recognised as one of the coldest in the last 500 million years of Earth's history.

"The idea that a ring system could have influenced global temperatures adds a new layer of complexity to our understanding of how extra-terrestrial events may have shaped Earth's climate," Professor Tomkins said.

Normally, asteroids impact the Earth at random locations, so we see impact craters distributed evenly over the Moon and Mars, for example. To investigate whether the distribution of Ordovician impact craters is non-random and closer to the equator, the researchers calculated the continental surface area capable of preserving craters from that time.

They focused on stable, undisturbed cratons with rocks older than the mid Ordovician period, excluding areas buried under sediments or ice, eroded regions, and those affected by tectonic activity. Using a GIS approach (Geographic Information System), they identified geologically suitable regions across different continents. Regions like Western Australia, Africa, the North American Craton, and small parts of Europe were considered well-suited for preserving such craters. Only 30 per cent of the suitable land area was determined to have been close to the equator, yet all the impact craters from this period were found in this region. The chances of this happening are like tossing a three-sided coin (if such a thing existed) and getting tails 21 times.

The implications of this discovery extend beyond geology, prompting scientists to reconsider the broader impact of celestial events on Earth's evolutionary history. It also raises new questions about the potential for other ancient ring systems that could have influenced the development of life on Earth.

Could similar rings have existed at other points in our planet's history, affecting everything from climate to the distribution of life? This research opens a new frontier in the study of Earth's past, providing new insights into the dynamic interactions between our planet and the wider cosmos.

Journal Reference: https://doi.org/10.1016/j.epsl.2024.118991


Original Submission #1 | Original Submission #2

posted by Fnord666 on Monday April 07, @10:11PM   Printer-friendly
from the just-wait-for-the-GNU/QNodeOS-sniping-to-begin dept.

Operating system for quantum networks is a first:

Researchers in the Netherlands, Austria, and France have created what they describe as the first operating system for networking quantum computers. Called QNodeOS, the system was developed by a team led by Stephanie Wehner at Delft University of Technology. The system has been tested using several different types of quantum processor and it could help boost the accessibility of quantum computing for people without an expert knowledge of the field.

In the 1960s, the development of early operating systems such as OS/360 and UNIX represented a major leap forward in computing. By providing a level of abstraction in its user interface, an operating system enables users to program and run applications without having to worry about how to reconfigure the transistors in the computer's processors. This advance laid the groundwork for many of the digital technologies that have revolutionized our lives.

"If you needed to directly program the chip installed in your computer in order to use it, modern information technologies would not exist," Wehner explains. "As such, the ability to program and run applications without needing to know what the chip even is has been key in making networks like the Internet actually useful."

The users of nascent quantum computers would also benefit from an operating system that allows quantum (and classical) computers to be connected in networks. Not least because most people are not familiar with the intricacies of quantum information processing.

However, quantum computers are fundamentally different from their classical counterparts, and this means a host of new challenges faces those developing network operating systems.

"These include the need to execute hybrid classical–quantum programs, merging high-level classical processing (such as sending messages over a network) with quantum operations (such as executing gates or generating entanglement)," Wehner explains.

Within these hybrid programs, quantum computing resources would only be used when specifically required. Otherwise, routine computations would be offloaded to classical systems, making it significantly easier for developers to program and run their applications.
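
To make that concrete, a hybrid program might be shaped roughly like the sketch below. Every class and method name here is a hypothetical illustration, not the QNodeOS API:

```python
# Toy sketch of the classical/quantum split; all names are invented.
class QuantumNode:
    """Stand-in for the quantum side: gates and entanglement."""

    def create_entanglement(self, peer: str) -> int:
        """Request an entangled pair with `peer`; return a qubit handle."""
        raise NotImplementedError("hardware-specific")

    def measure(self, qubit: int) -> int:
        """Measure a qubit, producing a classical bit."""
        raise NotImplementedError("hardware-specific")

def entangle_and_report(qnode: QuantumNode, peer: str, send_message) -> None:
    # Quantum resources are touched only where physics requires them...
    qubit = qnode.create_entanglement(peer)
    bit = qnode.measure(qubit)
    # ...while routine work, like notifying the peer, stays classical.
    send_message(peer, {"status": "measured", "bit": bit})
```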

[...] In addition, Wehner's team considered that, unlike the transistor circuits used in classical systems, quantum operations currently lack a standardized architecture – and can be carried out using many different types of qubits.

Wehner's team addressed these design challenges by creating QNodeOS as a hybrid network operating system. It combines classical and quantum "blocks" that provide users with a platform for performing quantum operations.

[...] QNodeOS is still a long way from having the same impact as UNIX and other early operating systems. However, Wehner's team is confident that QNodeOS will accelerate the development of future quantum networks.

"It will allow for easier software development, including the ability to develop new applications for a quantum Internet," she says. "This could open the door to a new area of quantum computer science research."


Original Submission

posted by Fnord666 on Monday April 07, @05:26PM   Printer-friendly
from the AI-boostery dept.

Slashdot also featured this story, via a bleepingcomputer.com summary. The original story is here: https://www.microsoft.com/en-us/security/blog/2025/03/31/analyzing-open-source-bootloaders-finding-vulnerabilities-faster-with-ai/

At first I thought this would be an advert for Microsoft Copilot tacked onto a tale of security hounds doing their stuff with vulnerabilities in GRUB2, but it does seem that AI saved some time for the investigators, and the article is worth a read.

Here is my summary:

"By leveraging Microsoft Security Copilot to expedite the vulnerability discovery process, Microsoft Threat Intelligence uncovered several vulnerabilities in multiple open-source bootloaders, impacting all operating systems relying on Unified Extensible Firmware Interface (UEFI) Secure Boot as well as IoT devices. The vulnerabilities found in the GRUB2 bootloader (commonly used as a Linux bootloader) and U-boot and Barebox bootloaders (commonly used for embedded systems), could allow threat actors to gain and execute arbitrary code.

Using Security Copilot, we were able to identify potential security issues in bootloader functionalities, focusing on filesystems due to their high vulnerability potential. This approach saved our team approximately a week's worth of time that would have otherwise been spent manually reviewing the content. Through a series of prompts, we identified and refined security issues, ultimately uncovering an exploitable integer overflow vulnerability.

[...] Through a combination of static code analysis tools (such as CodeQL), fuzzing the GRUB2 emulator (grub-emu) with AFL++, manual code analysis, and using Microsoft Security Copilot, we have uncovered several vulnerabilities.

Using Security Copilot, we initially explored which functionalities in a bootloader have the most potential for vulnerabilities, with Copilot identifying network, filesystems, and cryptographic signatures as key areas of interest. Given our ongoing analysis of network vulnerabilities and the fact that cryptography is largely handled by UEFI, we decided to focus on filesystems.

Using the JFFS2 filesystem code as an example, we prompted Copilot to find all potential security issues, including exploitability analysis. Copilot identified multiple security issues, which we refined further by requesting Copilot to identify and provide the five most pressing of these issues. In our manual review of the five identified issues, we found three were false positives, one was not exploitable, and the remaining issue, which warranted our attention and further investigation, was an integer overflow vulnerability."
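
The bug class at the end of that chain is a classic. A hedged illustration of the arithmetic, with invented values rather than GRUB2's actual JFFS2 code, showing how a 32-bit size computation can wrap so a parser allocates far less than it later copies:

```python
# Invented values, not GRUB2's JFFS2 code: a 32-bit multiply wraps,
# so an allocation ends up far smaller than the copy that follows it.
U32_MAX = 0xFFFF_FFFF

count = 0x0100_0000        # attacker-controlled metadata from the image
entry_size = 0x100

alloc_size = (count * entry_size) & U32_MAX   # wraps to 0 in 32-bit math
copy_size = count * entry_size                # the copy loop still uses this
print(hex(alloc_size), hex(copy_size))        # 0x0 vs 0x100000000

# The fix: reject sizes that would overflow before allocating.
print("would overflow:", count > U32_MAX // entry_size)  # True -> reject
```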


Original Submission

posted by hubie on Monday April 07, @12:41PM   Printer-friendly
from the snooper's-charter-2? dept.

Arthur T Knackerbracket has processed the following story:

The UK's technology secretary revealed the full breadth of the government's Cyber Security and Resilience (CSR) Bill for the first time this morning, pledging £100,000 ($129,000) daily fines for failing to act against specific threats under consideration.

Slated to enter Parliament later this year, the CSR bill was teased in the King's Speech in July, shortly after the Labour administration came into power. The gist of it was communicated at the time – to strengthen the NIS 2018 regulations and future-proof the country's most critical services from cyber threats – and Peter Kyle finally detailed the plans for the bill at length today.

Kyle said the CSR bill comprises three key pillars: Expanding the regulations to bring more types of organization into scope; handing regulators greater enforcement powers; and ensuring the government can change the regulations quickly to adapt to evolving threats.

Additional amendments are under consideration and may add to the confirmed pillars by the time the legislation makes its way through official procedures. These include bringing datacenters into scope, publishing a unified set of strategic objectives for all regulators, and giving the government the power to issue ad-hoc directives to in-scope organizations.

The latter means the government would be able to order regulated entities to make specific security improvements to counter a certain threat or ongoing incident, and this is where the potential fines come in.

If, for example, a managed service provider (MSP) – a crucial part of the IT supply chain – failed to patch against a widely exploited vulnerability within a time frame specified by a government order, and was then hit by attacks, it could face daily fines of £100,000 or 10 percent of turnover for each day the breach continues.

"Resilience is not improving at the rate necessary to keep pace with the threat and this can have serious real-world impacts," said Kyle. "The government's legislative plan for cyber security will address the vulnerabilities in our cyber defenses to minimize the impact of attacks and improve the resilience of our critical infrastructure, services, and digital economy."

[...] The third pillar – giving the government the authority to flexibly adapt the regulations as new threats emerge – is the lesser known of the three and wasn't really referred to in the King's Speech.

This could bring even more organizations into scope quickly, change regulators' responsibilities where necessary, or introduce new requirements for in-scope entities.

[...] In revealing the bill's details today, the tech secretary said the UK continues to face "unprecedented threats" to CNI, citing various attacks that plagued the country in recent times. Synnovis, Southern Water, local authorities, and those in the US and Ukraine all got a mention, and that's just scratching the surface of the full breadth of recent attacks.

Kyle said in an interview with The Telegraph that shortly after the UK's Labour party was elected, he was briefed by the country's spy chiefs about the threat to critical services – a session that left him "deeply concerned" over the state of cybersecurity.

"I was really quite shocked at some of the vulnerabilities that we knew existed and yet nothing had been done," he said.

[...] However, William Richmond-Coggan, partner of dispute management at legal eagle Freeths, warned:

"Even if every organization that the new rules are directed to had the budget, technical capabilities and leadership bandwidth to invest in updating their infrastructure to meet the current and future wave of cyber threats, it is likely to be a time consuming and costly process bringing all of their systems into line.

"And with an ever evolving cyber threat profile, those twin investments of time and budget need to be incorporated as rolling commitments – achieving a cyber secure posture is not a 'one and done'. Of at least equal importance is the much needed work of getting individuals employed in these nationally important organisations to understand that cyber security is only as strong as its weakest link, and that everyone has a role to play in keeping such organisations safe."


Original Submission

posted by hubie on Monday April 07, @07:56AM   Printer-friendly
from the good-advice dept.

Cell Phone OPSEC for Border Crossings - Schneier on Security:

I have heard stories of more aggressive interrogation of electronic devices at US border crossings. I know a lot about securing computers, but very little about securing phones.

Are there easy ways to delete data—files, photos, etc.—on phones so it can't be recovered? Does resetting a phone to factory defaults erase data, or is it still recoverable? That is, does the reset erase the old encryption key, or just sever the password that accesses that key? When the phone is rebooted, are deleted files still available?

We need answers for both iPhones and Android phones. And it's not just the US; the world is going to become a more dangerous place to oppose state power.

Posted on April 1, 2025 at 7:01 AM

See also: Yes, border control can go through your phone. Here's what travelers should know.


Original Submission