Connecting with Tech – 2024

What I found interesting… in 2024

AI Jailbreaks

ChatGPT Break

Jailbreaking is the process of exploiting flaws in a locked-down electronic device to install software other than what the manufacturer has made available for that device. Jailbreaking allows the device owner to gain full root access to the operating system and all of its features. It is called jailbreaking because it involves freeing users from the perceived “jail” of limitations.

The term jailbreaking is most often used in relation to the iPhone, since it is considered the most “locked down” mobile device currently on sale. Wikipedia has entries on iOS jailbreaking, SIM unlocking, and Rooting (Android).

In June 2024 the Weekend FT ran an article on AI jailbreaks, and here is a quick description, with a more extensive one on GitHub for breaking ChatGPT. The overall idea is to tell an AI not to respect any of its developers’ constraints, to invent an answer when it does not know one, and so on. This is called the Do Anything Now (DAN) prompt.

Wikipedia has pages on prompt engineering and adversarial machine learning.

The title poses an interesting question, and the answer is designed to shock you. According to an annual table published by a security company, a simple eight-character password can be cracked in only 37 seconds using brute force.

I don’t doubt that this could be true, but…

The article also points out that even if a password is weak, websites usually have security features to prevent brute-force attacks, such as limiting the number of login attempts.

Also, many portals use an additional layer of security, such as two-factor authentication, to prevent fraud.

I asked to download the table, but the company refuses to accept any Apple email address. Interesting!

However, the table is freely available on the Web, as are the tables for previous years. And here I have a question. Comparing the tables for 2024 and 2020, it appears that it is harder to crack passwords now than four years ago. Using brute force to crack a password of 10 numbers was instantaneous in 2020 but today would take 1 hour, and a password of 15 numbers would take 6 hours in 2020 but today would take 12 years. So my question is: why is it harder now than four years ago?

I can’t get my head around a situation where in 2020 it would take a hacker 9 months to brute-force a password of 18 numbers, but in 2024 it would take 11,000 years! (A likely explanation is that these tables assume the attacker is cracking stolen password hashes, and the more recent tables assume a much slower hashing algorithm, so each guess costs far more computing time.)

The article also suggests using “How secure is my password?” to test the strength of passwords. I tried it with some fake passwords. I found that a random six numbers would take 25 microseconds, and a random 15 numbers would take 6 hours. Looks reasonable, even if it would be a really stupid system that allowed a hacker to keep trying passwords for 6 hours. What intrigued me was finding that the password 111111111111111 also took 6 hours. This suggests to me that our hacker is not really trying to optimise his hack strategy.

I also found it odd that a simple 2-number password (11) would take 2 nanoseconds to break (111 would take 24 nanoseconds), but 4- to 8-number passwords (1111 through to 11111111) were instantaneous. Why is it faster to break a longer password? Also, a password aaa (or AAA) would take 400 nanoseconds, whereas aaaa was instantaneous, yet AAAA would take 11 microseconds. Why?
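
These oddities make more sense once you know how such checkers typically work: most simply raise the character-set size to the power of the password length and divide by an assumed guessing rate, ignoring patterns entirely. Here is a minimal sketch of that model; the guessing rate is my own illustrative assumption, not a figure from the article or the tool.

```python
# Minimal sketch of the usual "time to crack" model: the checker computes
# charset_size ** length possible guesses and divides by an assumed rate.
# The rate below is my own illustrative assumption, not the tool's figure.

GUESSES_PER_SECOND = 10_000_000_000  # assumed: 10 billion guesses per second

def time_to_crack_seconds(length: int, charset_size: int) -> float:
    """Worst-case seconds to exhaust all passwords of this length and charset."""
    return charset_size ** length / GUESSES_PER_SECOND

for length in (6, 10, 15):
    secs = time_to_crack_seconds(length, charset_size=10)  # digits only
    print(f"{length} digits: {secs:,.6f} s (~{secs / 3600:.2f} hours)")

# Because the model depends only on length and character set, a pattern like
# 111111111111111 gets exactly the same estimate as a random 15-digit password.
```

The instantaneous results for 1111 or aaaa are presumably a separate shortcut: such checkers usually test the input against a list of the most common leaked passwords first, and anything on that list is reported as instantly crackable.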

As a final point the article also mentioned that whilst frequent password changes were previously advised, experts now emphasise creating strong, unique passwords and sticking with them unless they are compromised. This approach is considered more effective than frequent modifications, which can lead to weaker passwords and reusing similar ones.

So it takes an expert to tell us to change our passwords if they are compromised. Are we stupid, or something?

“Poisoning Data to Protect It” builds on techniques that subtly alter the pixels in digital portraits, rendering the images incomprehensible to automated facial recognition systems.

Midjourney, meanwhile, is a generative artificial intelligence program that generates images from natural language descriptions, as do OpenAI’s DALL-E and Stability AI’s Stable Diffusion.

The aim now is to focus on data poisoning, a technology that protects creators by going beyond visual media to sound and text.

Nightshade is a poison pill: it can subtly alter an image of a cat so that it appears unchanged to humans but has the features of a dog to an AI model. The basic idea is that Nightshade is a more aggressive form of copyright protection, one that makes it too risky to train a model on unlicensed content.

AntiFake is a similar poison pill because it makes small changes to the sound waves expressing a person’s particular voice. These perturbations are designed to maximize the impact on the AI model without impacting how the audio sounds to the human ear.
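
Nightshade and AntiFake each have their own, more sophisticated methods, but the underlying mechanism in both cases resembles a classic adversarial perturbation: work out how the model’s output responds to each pixel (or audio sample), then nudge the input a tiny amount in the direction that misleads the model. Below is a minimal, hypothetical sketch in the style of the well-known fast gradient sign method; the model, class index, and epsilon are placeholders, and this is emphatically not the actual Nightshade or AntiFake algorithm.

```python
# A minimal, hypothetical sketch of the mechanism behind these "poison pills":
# a targeted adversarial perturbation in the style of the fast gradient sign
# method (FGSM). This is NOT the actual Nightshade or AntiFake algorithm,
# just an illustration of an imperceptible, model-misleading change.
import torch
import torch.nn.functional as F

def perturb_toward_target(model, image, target_class, epsilon=2/255):
    """Return a copy of `image` nudged so the model leans toward target_class."""
    image = image.clone().requires_grad_(True)
    loss = F.cross_entropy(model(image), torch.tensor([target_class]))
    loss.backward()
    # Step against the gradient: reduce the loss for the *wrong* class,
    # making the model more confident the image belongs to target_class.
    poisoned = image - epsilon * image.grad.sign()
    return poisoned.clamp(0, 1).detach()

# Hypothetical usage, assuming `model` is a classifier over [0, 1] images and
# class index 5 means "dog": poisoned_cat = perturb_toward_target(model, cat, 5)
```

The key point, as with Nightshade’s cat-to-dog example, is that epsilon is kept small enough that a human sees no difference while the model’s gradients are pushed the wrong way.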

Data poisoning is not merely a protective tool; it has been used in numerous cyberattacks. Recently, researchers have detailed ways in which the output of large language models (LLMs) can be poisoned during fine-tuning so that specific text inputs trigger undesirable or offensive results.

This article, “Shaping the Outlook for the Autonomy Economy”, is about Autonomous Machine Computing (AMC), the technological backbone enabling diverse autonomous systems such as autonomous vehicles, delivery robots, and drones. These are part of the so-called Autonomy Economy, which includes everything from robots that deliver food from restaurants to robotic vacuum cleaners.

In the early 2000s, “feature” phones were widespread yet offered limited functionality, focusing 90% of their computing power on basic communication tasks like encoding and decoding. Today’s phones are home to systems-on-chip, integrating multi-core CPUs, mobile GPUs, mobile DSPs, and advanced power management systems. The mobile computing ecosystem’s market size has grown to $800 billion.

Existing designs of AMC systems heavily prioritise basic operational functions, with 50% of computing resources allocated to perception, 20% to localisation, and 25% to planning. This leaves a minimal 5% for application development and execution, significantly restricting the capability of autonomous machines to perform complex, intelligent tasks. The next step is the development of advanced computing systems that are easy to program, and there is now a roadmap for this.

The article “OpenAI says it stopped multiple covert influence operations that abused its AI models” mentions generative AI being used to produce text and images at much higher volumes than before, and to fake engagement by generating comments on social media posts. The culprits were Russia, China, Iran and Israel.

This TIME magazine article mentioned that the Chinese “911 S5” botnet, a network of malware-infected computers in nearly 200 countries, was “likely the world’s largest”. It looks like it included 19 million Windows computers.

A deepfake video scam

I picked this up from the FT, but this free link is just as complete: “Arup employee falls victim to US$25 million deepfake video call scam”.

Engineering firm Arup has confirmed that one of its employees in Hong Kong fell victim to a deepfake video call that led them to transfer HK$200 million (US$25.6 million) of the company’s money to criminals. It looks like the employee was convinced that they were talking to the company’s UK-based chief financial officer (CFO) by video conference, through hyper-realistic AI-generated video and audio. “He” asked that the Hong Kong office make 15 “confidential transactions” to five different Hong Kong bank accounts. The scam was detected when they did a follow-up with head office.

Another attempt at a different company, using a voice clone and YouTube footage for a video meeting, failed.

The “Sift” strategy is a technique for spotting fake news and misleading social media posts.

More misinformation seems to be shared by individuals than by bots, and one study found that just 15% of news sharers spread up to 40% of fake news.

So what is “Sift”?

S is for Stop. Don’t share, don’t comment.

I is for Investigate. Check who created the post. Use reputable websites, fact-checkers, or just Wikipedia. Ask if the source could be biased, or if they are trying to promote or sell something.

F is for Find. Look for other sources of information, use a fact-checking engine, and try to find credible sources also reporting on the same issue.

T is for Trace. Find where the claim or news came from originally. Credible media outlets can also fall into a trap.

My own take on this: instead of sharing and then thinking, just don’t think and don’t share. Only share when you have had time to verify, when you feel confident in the post or news item, and when you can add something, even if it’s only a personal comment or opinion. We all know that a piece of stupid fake news can be a fun item to one friend, but destructive and destabilising to another.

Don’t think, don’t share, and then verify, think, edit, comment, and share selectively. 

AI's 'insatiable' electricity demand

Data Centre

In an article entitled “AI’s ‘insatiable’ electricity demand could slow its growth — a lot, Arm exec says”, we learn that “data centers powering AI chatbots already account for 2% of global electricity consumption”. It would appear that ChatGPT requires 15 times more energy than a traditional web search. A separate report, according to Reuters, estimates that energy consumption by hardware in data centres will more than double in the years ahead, from 21 gigawatts in 2023 to more than 50 gigawatts in 2030.

The article also mentioned that there were already 9,000-11,000 cloud data centres across the globe, and that for 2024 it was estimated they would consume 46 terawatt-hours, three times more than in 2023.

“Electricity grids creak as AI demands soar” is another article on how AI is pushing the demand for electricity.

Generative AI systems create content (answers) from scratch and use around 33 times more energy than machines running task-specific software. In 2022, data centres consumed 460 terawatt-hours of electricity, and this will double over the next four years. Data centres could be using a total of 1,000 terawatt-hours annually by 2026.

Interesting how the figures are inconsistent between the two articles.
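
Part of the problem is units: one article talks in gigawatts of hardware capacity, the other in terawatt-hours of annual consumption. Here is a quick sketch of the conversion; the continuous-load assumption is my own simplification, not something either article claims.

```python
# Quick unit check on the two articles' figures. Converting power (gigawatts)
# into annual energy (terawatt-hours) assumes the load runs continuously all
# year round; that simplifying assumption is mine, not either article's.

HOURS_PER_YEAR = 24 * 365  # 8,760 hours

def gw_to_twh_per_year(gigawatts: float) -> float:
    """Annual energy in TWh of a constant load of `gigawatts` GW."""
    return gigawatts * HOURS_PER_YEAR / 1000  # GW x h = GWh; / 1000 = TWh

print(gw_to_twh_per_year(21))  # 2023 hardware figure -> about 184 TWh/year
print(gw_to_twh_per_year(50))  # 2030 estimate -> about 438 TWh/year

# Set against 460 TWh for all data centres in 2022 (second article) and
# 46 TWh for cloud data centres in 2024 (first article), the numbers cannot
# all be measuring the same thing.
```

Even allowing for different scopes (AI hardware, cloud data centres, all data centres), the figures don’t reconcile, which rather supports my point.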

Sweden has long opposed nuclear weapons – but it once tried to build them

Swedish Nuclear Weapon Programme

This article noted that after World War Two Sweden embarked on a plan to build its own atomic bomb. Sweden only stopped planning for the production of nuclear weapons in 1966, but carried on limited research into the 1970s.

Sweden signed the Non-Proliferation Treaty (NPT) in 1968; its citizens voted to join the EU in the referendum of 13 November 1994, and it became a member on 1 January 1995.

A library on the moon?

Moon Archive

The article “There’s a library on the moon now. It might last billions of years” tells us that 30 million pages, 25,000 songs and a whole bunch of art were left on the Moon by the Galactic Legacy Archive. Sunlight and the gamma rays that bombard the Moon’s surface would break down paper, so the archive is etched in nickel, in layers so tiny that you need a microscope to read them. For the music and images there is a nickel-etched primer describing the digital encoding used.

The cloud under the sea

Undersea Cables

“The Cloud Under the Sea” is a very extensive article, with a really impressive set of graphics, about the cable ships that lay and maintain undersea communication cables, the backbone of the Internet.

I would be doing the article a disservice if I tried to summarise it; it’s worth more than that. As a primer, check out Wikipedia on “Submarine communications cable” and “Cable layer”.

The Contested World of Classifying Life on Earth

Stuffed Birds

This article is about taxonomy, and specifically about the fact that there exists no single, unified list of all the species on Earth.

This appears to be a bit surprising, as are the heated discussions about how to come up with a single, unified list. Some people consider taxonomy the most fundamental biological science because it reflects how humans think about and structure the world. A “common shared understanding” sounds like a good idea, but some expert groups wrote that it was “not only unnecessary and counterproductive, but also a threat to scientific freedom”.

This is not an article that will interest everyone, but I just found the whole context both surprising and disappointing.

Identity theft

This article, “Man pleads guilty to stealing former coworker’s identity for 30 years”, describes how someone used a coworker’s identity to commit crimes and rack up debt. In addition, the victim was incarcerated after the thief accused the victim of identity theft and the police failed to work out who was who.
