AMD

Nov. 14th, 2024 07:17 am
paserbyp: (Default)
Advanced Micro Devices (AMD) is laying off 4% of its global workforce, around 1,000 employees, as it pivots resources to developing AI-focused chips. This marks a strategic shift by AMD to challenge Nvidia’s lead in the sector.

“As a part of aligning our resources with our largest growth opportunities, we are taking a number of targeted steps that will unfortunately result in reducing our global workforce by approximately 4%,” an AMD spokesperson said.

“We are committed to treating impacted employees with respect and helping them through this transition,” the spokesperson added. However, it remains unclear which departments will bear the brunt of the layoffs.

The latest layoffs were announced even as AMD’s quarterly earnings reflected strong results, with increases in both revenue and net profit.

This surprised many in the industry. Employees also expressed their shock on the anonymous workplace forum Blind, where news of the layoffs first surfaced before being confirmed by the company.

On a closer look, however, the Q3 results showed both strengths and challenges: while total revenue rose by 18% to $6.8 billion, gaming chip revenue plummeted 69% year-over-year, and embedded chip sales dropped 25%.

In its recent earnings call, AMD CEO Lisa Su underscored that the data center and AI business is now pivotal to the company’s future, with the company expecting 98% growth in this segment for 2024.

Su attributed the recent revenue gains to orders from clients like Microsoft and Meta, with the latter now adopting AMD’s MI300X GPUs for internal workloads.

However, unlike AMD’s relatively targeted job reductions, Intel recently implemented far larger cuts, eliminating approximately 15,000 positions amid its restructuring efforts.

AMD has been growing rapidly through initiatives such as optimizing Instinct GPUs for AI workloads and meeting data center reliability standards, which led to a $500 million increase in the company’s 2024 Instinct sales forecast.

Major clients like Microsoft and Meta also expanded their use of MI300X GPUs, with Microsoft using them for Copilot services and Meta deploying them for Llama models. Public cloud providers, including Microsoft and Oracle Cloud, along with several AI startups, adopted MI300X instances as well.

This highlights AMD’s intensified focus on AI, which has driven its R&D spending up nearly 9% in the third quarter. The increased investment supports the company’s efforts to scale production of its MI325X AI chips, which are expected to be released later this year.

In addition, AMD recently introduced its first open-source large language models under the OLMo brand, targeting a stronger foothold in a competitive AI market dominated by industry leaders like Nvidia, Intel, and Qualcomm.

“AMD could very well build a great full-stack AI proposition with a play across hardware, LLM, and broader ecosystem layers, giving it a key differentiator among other major silicon vendors,” said Suseel Menon, practice director at Everest Group. AMD is considered Nvidia’s closest competitor in the high-value chip market, powering advanced data centers that handle the extensive data needs of generative AI technologies.

Mini PC

Jul. 9th, 2024 09:33 am
paserbyp: (Default)
A Chinese company is developing a mini PC that operates inside a foldable keyboard, making it portable enough to carry in your back pocket.

The device comes from Ling Long, which debuted the foldable keyboard PC on Chinese social media. To innovate in the mini PC space, the company packed AMD's laptop chip, the Ryzen 7 8840U, inside a collapsible keyboard.

The device was designed without sacrificing any of the PC’s capabilities. The motherboard and cooling fan are built inside one half of the product, while the 16,000 mAh battery is in the other. In the center is a hinge that allows the keyboard to fold.

When folded, the keyboard is about one-fourth the size of an Apple MacBook. All the keys are normal-sized, and the keyboard features a mini touchpad near the lower-right corner.

On the downside, the product lacks a built-in display, a feature that all conventional laptops possess. But the device has one USB-A port and two USB-C ports, enabling it to connect to an external monitor and other accessories. Owners can also connect the device to a monitor or tablet wirelessly. It supports Wi-Fi 6 and promises to run for 4 to 10 hours, depending on the use case.

Although no launch date was given, Ling Long plans to sell the product for 4,699 yuan ($646). In the short term, the company is offering the foldable keyboard during a beta early-access period limited to only 200 units. The company is currently accepting orders for these test units at 2,699 yuan for the 16GB/512GB model and 3,599 yuan for the 32GB/1TB model.

More details: https://www.bilibili.com/video/BV1Dz421B7zi
paserbyp: (Default)
On Tuesday, the US General Services Administration began an auction for the decommissioned Cheyenne supercomputer, located in Cheyenne, Wyoming. The 5.34-petaflop supercomputer ranked as the 20th most powerful in the world at the time of its installation in 2016. Bidding started at $2,500, but its price is currently $27,643 with the reserve not yet met (more details: https://gsaauctions.gov/auctions/preview/282996).

The supercomputer, which officially operated between January 12, 2017, and December 31, 2023, at the NCAR-Wyoming Supercomputing Center, was a powerful (and once considered energy-efficient) system that significantly advanced atmospheric and Earth system sciences research.

UCAR says that Cheyenne was originally slated to be replaced after five years, but the COVID-19 pandemic severely disrupted supply chains, and it clocked two extra years in its tour of duty. The auction page says that Cheyenne recently experienced maintenance limitations due to faulty quick disconnects in its cooling system. As a result, approximately 1 percent of the compute nodes have failed, primarily due to ECC errors in the DIMMs. Given the expense and downtime associated with repairs, the decision was made to auction off the components.

With a peak performance of 5,340 teraflops (4,788 Linpack teraflops), this SGI ICE XA system was capable of performing over 3 billion calculations per second for every watt of energy consumed, making it three times more energy-efficient than its predecessor, Yellowstone. The system featured 4,032 dual-socket nodes, each with two 18-core, 2.3-GHz Intel Xeon E5-2697v4 processors, for a total of 145,152 CPU cores. It also included 313 terabytes of memory and 40 petabytes of storage. The entire system in operation consumed about 1.7 megawatts of power.
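
For a quick sanity check of those efficiency figures, here is a back-of-the-envelope calculation in Python using only the numbers quoted above:

```python
# Quick check of Cheyenne's quoted figures
peak_flops = 5_340e12    # 5,340 teraflops peak performance
power_watts = 1.7e6      # about 1.7 megawatts in operation

print(peak_flops / power_watts / 1e9)  # ~3.14 gigaflops per watt
print(4_032 * 2 * 18)                  # 145,152 cores (nodes x sockets x cores per socket)
```

That works out to roughly 3.1 gigaflops per watt, consistent with the "over 3 billion calculations per second for every watt" claim.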

For comparison, the world's top-rated supercomputer at the moment, Frontier at Oak Ridge National Laboratory in Tennessee, features a theoretical peak performance of 1,679.82 petaflops, includes 8,699,904 CPU cores, and uses 22.7 megawatts of power.

The GSA notes that potential buyers of Cheyenne should be aware that professional movers with appropriate equipment will be required to handle the heavy racks and components. The auction includes seven E-Cell pairs (14 total), each with a cooling distribution unit (CDU). Each E-Cell weighs approximately 1,500 lbs. Additionally, the auction features two air-cooled Cheyenne Management Racks, each weighing 2,500 lbs, that contain servers, switches, and power units.

So far, 12 potential buyers have bid on this computing monster. The auction closes on May 5 at 6:11 pm Central Time if you're interested in bidding. But don't get too excited by photos of the extensive cabling: as the auction site notes, "fiber optic and CAT5/6 cabling are excluded from the resale package."
paserbyp: (Default)


When Apple announced the Macintosh personal computer with a Super Bowl XVIII television ad on January 22, 1984, it more resembled a movie premiere than a technology release. The commercial was, in fact, directed by filmmaker Ridley Scott. That’s because founder Steve Jobs knew he was not selling just computing power, storage or a desktop publishing solution. Rather, Jobs was selling a product for human beings to use, one to be taken into their homes and integrated into their lives.

This was not about computing anymore. IBM, Commodore and Tandy did computers. As a human-computer interaction scholar, I believe that the first Macintosh was about humans feeling comfortable with a new extension of themselves, not as computer hobbyists but as everyday people. All that “computer stuff” – circuits and wires and separate motherboards and monitors – were neatly packaged and hidden away within one sleek integrated box.

You weren’t supposed to dig into that box, and you didn’t need to dig into that box – not with the Macintosh. The everyday user wouldn’t think about the contents of that box any more than they thought about the stitching in their clothes. Instead, they would focus on how that box made them feel.

As computers go, was the Macintosh innovative?

Sure. But not for any particular computing breakthrough. The Macintosh was not the first computer to have a graphical user interface or employ the desktop metaphor: icons, files, folders, windows and so on. The Macintosh was not the first personal computer meant for home, office or educational use. It was not the first computer to use a mouse. It was not even the first computer from Apple to be or have any of these things. The Apple Lisa, released a year before, had them all.

It was not any one technical thing that the Macintosh did first. But the Macintosh brought together numerous advances that were about giving people an accessory – not for geeks or techno-hobbyists, but for home office moms and soccer dads and eighth grade students who used it to write documents, edit spreadsheets, make drawings and play games. The Macintosh revolutionized the personal computing industry and everything that was to follow because of its emphasis on providing a satisfying, simplified user experience.

Where computers typically had complex input sequences in the form of typed commands (Unix, MS-DOS) or multibutton mice (Xerox STAR, Commodore 64), the Macintosh used a desktop metaphor in which the computer screen presented a representation of a physical desk surface. Users could click directly on files and folders on the desktop to open them. It also had a one-button mouse that allowed users to click, double-click and drag-and-drop icons without typing commands.

The Xerox Alto had first exhibited the concept of icons, invented in David Canfield Smith’s 1975 Ph.D. dissertation. The 1981 Xerox Star and 1983 Apple Lisa had used desktop metaphors. But these systems had been slow to operate and still cumbersome in many aspects of their interaction design.

The Macintosh simplified the interaction techniques required to operate a computer and improved functioning to reasonable speeds. Complex keyboard commands and dedicated keys were replaced with point-and-click operations, pull-down menus, draggable windows and icons, and systemwide undo, cut, copy and paste. Unlike with the Lisa, the Macintosh could run only one program at a time, but this simplified the user experience.

The Macintosh also provided a user interface toolbox for application developers, enabling applications to have a standard look and feel by using common interface widgets such as buttons, menus, fonts, dialog boxes and windows. With the Macintosh, the learning curve for users was flattened, allowing people to feel proficient in short order. Computing, like clothing, was now for everyone.

Whereas prior systems prioritized technical capability, the Macintosh was intended for nonspecialist users – at work, school or in the home – to experience a kind of out-of-the-box usability that today is the hallmark of not only most Apple products but an entire industry’s worth of consumer electronics, smart devices and computers of every kind.

It is ironic that the Macintosh technology being commemorated in January 2024 was never really about technology at all. It was always about people. This is inspiration for those looking to make the next technology breakthrough, and a warning to those who would dismiss the user experience as only of secondary concern in technological innovation.

paserbyp: (Default)
Certain historic documents capture the most crucial paradigm shifts in computing technology, and they are priceless. Perhaps the most valuable takeaway from this tour of brilliance is that there is always room for new ideas and approaches.

Right now, someone, somewhere, is working on a way of doing things that will shake up the world of software development. Maybe it's you, with a paper that could wind up being #10 on this list. Just don’t be too quick to dismiss wild ideas—including your own.

So please take a look back over the past century (nearly) of software development, encoded in papers that every developer should read:

1. Alan Turing: On Computable Numbers, with an Application to the Entscheidungsproblem (1936)

Turing's writing(https://www.cs.virginia.edu/~robins/Turing_Paper_1936.pdf) has the character of a mind exploring on paper an uncertain terrain, and finding the landmarks to develop a map. What's more, this particular map has served us well for almost a hundred years.

It is a must-read on many levels, including as a continuation of Gödel's work on incompleteness(https://plato.stanford.edu/entries/goedel-incompleteness). Just the unveiling of the tape-and-machine idea makes it worthwhile.

More details: https://en.wikipedia.org/wiki/Entscheidungsproblem

2. John von Neumann: First Draft of a Report on the EDVAC (1945)

The von Neumann paper(https://web.mit.edu/STS.035/www/PDFs/edvac.pdf) asks what the character of a general computer would be, as it “applies to the physical device as well as to the arithmetical and logical arrangements which govern its functioning.” Von Neumann's answer was an outline of the modern digital computer.

3. John Backus et al.: Specifications for the IBM Mathematical FORmula TRANSlating System, FORTRAN (1954)

The FORTRAN specification(https://archive.computerhistory.org/resources/text/Fortran/102679231.05.01.acc.pdf) gives a great sense of the moment and helped to create a model that language designers have adopted since. It captures the burgeoning sense of what was then just becoming possible with hardware and software.

4. Edsger Dijkstra: Go To Statement Considered Harmful (1968)

Aside from giving us the “considered harmful” meme, Edsger Dijkstra’s 1968 paper(https://homepages.cwi.nl/~storm/teaching/reader/Dijkstra68.pdf) not only identifies the superiority of loops and conditional control flows over the hard-to-follow go-to statement, but instigates a new way of thinking and talking about the quality of code.

Dijkstra’s short treatise also helped usher in a generation of higher-level languages, bringing us one step closer to the programming languages we use today.

5. Whitfield Diffie and Martin Hellman: New Directions in Cryptography (1976)

When it landed, New Directions in Cryptography(https://www-ee.stanford.edu/~hellman/publications/24.pdf) set off an epic battle between open communication and government espionage agencies like the NSA. It was an extraordinary moment in software, and history in general, and we have it in writing. The authors also seemed to understand the radical nature of their proposal—after all, the paper's opening words were: “We stand today on the brink of a revolution in cryptography.”

6. Richard Stallman: The GNU Manifesto (1985)

The GNU Manifesto(https://www.gnu.org/gnu/manifesto.en.html) is still fresh enough today that it reads like it could have been written for a GitHub project in 2023. It is surely the most entertaining of the papers on this list.

7. Roy Fielding: Architectural Styles and the Design of Network-based Software Architectures (2000)

Fielding’s paper(https://ics.uci.edu/~fielding/pubs/dissertation/top.htm) introducing the REST architectural style landed in 2000; it summarized lessons learned from the distributed programming environment of the '90s, then proposed a way forward. In this regard, I believe it set the tone for two decades of software development history.

8. Satoshi Nakamoto: Bitcoin: A Peer-to-Peer Electronic Cash System (2008)

The now-famous Nakamoto paper(https://bitcoin.org/bitcoin.pdf) was written by a person, group of people, or entity unknown. It draws together all the prior art in digital currencies and summarizes a solution to their main problems. In particular, the Bitcoin paper addresses the double-spend problem.

Beyond the simple notion of a currency like Bitcoin, the paper suggested an engine that could leverage cryptography in producing distributed virtual machines like Ethereum.

The Bitcoin paper is a wonderful example of how to present a simple, clean solution to a seemingly bewildering mess of complexity.

9. Martin Abadi et al.: TensorFlow: A System for Large-Scale Machine Learning (2015)

This paper(https://www.usenix.org/system/files/conference/osdi16/osdi16-abadi.pdf), by Martín Abadi and a host of contributors too extensive to list, focuses on the specifics of TensorFlow, especially its design as a more generalized machine-learning platform. In the process, it provides an excellent, high-level tour of the state of the art in machine learning. It is great reading for the ML-curious and for anyone looking for a plain-language entry into a deeper understanding of the field.
paserbyp: (Default)


All of the best 3D printers print with some form of plastic, either filament or resin. But an upcoming printer, Cocoa Press, uses chocolate to create models you can eat. The brainchild of maker and BattleBots competitor Ellie Weinstein, who has been working on iterations of the printer since 2014, Cocoa Press will be available for pre-order starting on April 17th via cocoapress.com (the company is also named Cocoa Press).

Cocoa Press DIY kits will start at $1,499 and are estimated to ship in September, while professional packages, which come fully built, will cost $3,995 and ship in early 2024. When reserving your printer, you'll only have to put a $100 deposit down, with the rest due at shipping time. The company says it should take about 10 hours to put together the DIY kit.

The Cocoa Press has a build volume of 140 x 150 x 150 mm, which is small for a regular 3D printer but more than adequate for most chocolate creations. Unlike most plastic filaments, which need to be heated to 200 to 250 degrees Celsius, this printer only heats its chocolate to 33 degrees Celsius (91.4 degrees Fahrenheit), which is just short of body temperature. The bed is not heated.

In lieu of a roll of filament or a tank full of resin, the Cocoa Press uses 70g cartridges of special chocolate that stays solid at temperatures up to 26.67 degrees Celsius (80 degrees Fahrenheit), which the company will sell for $49 for a 10-pack. The cigar-shaped chocolate pieces go into a metal syringe where the whole charge is melted at once, rather than melting as it passes through the extruder the way filament does in a typical FDM printer.

The printer is safe and sanitary as the chocolate only touches four parts, which are all easy to remove (without tools) and clean in a sink. The Cocoa Press has an attractive orange, silver and black aesthetic that's reminiscent of a Prusa Mini+.

It uses an Ultimachine Archim2 32-bit controller board running Marlin firmware, the same firmware that ships on most FDM printers. You can use standard 3D models that you create or download from sites like Printables or Thingiverse and then slice them in PrusaSlicer.

This is not the very first printer that Cocoa Press has released. Weinstein's company sold a larger and much more expensive model, technically known as version 5 "Chef," for $9,995 back in 2020, but she stopped producing that and is now focusing on the less expensive, smaller model. She told us that everyone who bought the old model will receive a free copy of the new one.

Metacloud

Jul. 24th, 2022 05:28 pm
paserbyp: (Default)
New terms are beginning to emerge, such as “supercloud,” “distributed cloud,” “metacloud,” and “abstract cloud.” Even the term “cloud native” is up for debate. The common pattern seems to be a collection of public clouds, and sometimes edge-based systems, that work together for some greater purpose.

The metacloud concept will be the single focus for the next 5 to 10 years as we begin to put public clouds to work. Having a collection of cloud services managed with abstraction and automation is much more valuable than attempting to leverage each public cloud provider on its terms rather than yours.

We want to leverage public cloud providers through abstract interfaces to access specific services, such as storage, compute, artificial intelligence, data, etc., and we want to support a layer of cloud-spanning technology that allows us to use those services more effectively. A metacloud removes the complexity that multicloud brings these days. Also, scaling operations to support multicloud would not be cost-effective without this cross-cloud layer of technology.
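
To make the idea of such a cross-cloud abstraction layer concrete, here is a minimal sketch in Python of what the interface might look like. The class and method names are hypothetical, not any vendor's actual API; real adapters would wrap each provider's native SDK.

```python
from abc import ABC, abstractmethod

# Hypothetical cross-cloud storage interface: application code targets this,
# while thin per-provider adapters translate calls to each cloud's native SDK.
class ObjectStore(ABC):
    @abstractmethod
    def put(self, bucket: str, key: str, data: bytes) -> None: ...

    @abstractmethod
    def get(self, bucket: str, key: str) -> bytes: ...

class S3Store(ObjectStore):
    def put(self, bucket, key, data):
        raise NotImplementedError  # would wrap boto3's put_object here

    def get(self, bucket, key):
        raise NotImplementedError  # would wrap boto3's get_object here

class AzureBlobStore(ObjectStore):
    def put(self, bucket, key, data):
        raise NotImplementedError  # would wrap azure-storage-blob's upload_blob here

    def get(self, bucket, key):
        raise NotImplementedError  # would wrap azure-storage-blob's download_blob here

def archive_report(store: ObjectStore, data: bytes) -> None:
    # The caller never names a provider; swapping clouds means swapping adapters.
    store.put("reports", "q3.pdf", data)
```

The point of the sketch is simply that security, governance, and deployment logic can live above the adapters, which is the single cross-cloud layer the metacloud argument is about.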

Thus, we’ll only have a single layer of security, governance, operations, and even application development and deployment. This is really what a multicloud should become. If we attempt to build more silos using proprietary tools that only work within a single cloud, we’ll need many of them. We’re just building more complexity that will end up making multicloud more of a liability than an asset.

I really don’t care what we call it. However, that does not change the fact that the metacloud is perhaps the most important architectural evolution occurring right now, and we need to get it right out of the gate. If we do that, who cares what it is named?
paserbyp: (Default)


While social distancing might have become less of a priority for many as 2021 has drawn to a close, this DIY project helps bring the distant members of your household (including kids or spouses working from home) a little closer. It started as an effort to keep a quarantined-for-weeks father in touch with his daughter in the same house.

Thanks to the creative use of a Telegram voice-chat bot, you can use Raspberry Pi to call family members to dinner or let them know that Daddy or Mommy (hopefully, not both at once) is leaving the house for a mental health day. All that is needed here is a Raspberry Pi board, a USB microphone, a USB speaker, a few buttons, and a willingness not to shout at family members who don’t respond immediately.
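
As a rough illustration of the idea, here is a hedged sketch of a text-message variant (the original project uses a Telegram voice-chat bot). It assumes a bot token and chat ID obtained through Telegram's public Bot API, and buttons wired to GPIO 17 and 27; the pins and canned messages are placeholders.

```python
import requests              # pip install requests
from gpiozero import Button  # preinstalled on Raspberry Pi OS
from signal import pause

BOT_TOKEN = "123456:ABC..."  # placeholder: token from @BotFather
CHAT_ID = "123456789"        # placeholder: the family group chat ID

def send_message(text: str) -> None:
    # Telegram Bot API sendMessage endpoint
    requests.post(
        f"https://api.telegram.org/bot{BOT_TOKEN}/sendMessage",
        data={"chat_id": CHAT_ID, "text": text},
        timeout=10,
    )

dinner_button = Button(17)   # one physical button per canned announcement
leaving_button = Button(27)

dinner_button.when_pressed = lambda: send_message("Dinner is ready!")
leaving_button.when_pressed = lambda: send_message("Heading out for a bit.")

pause()  # keep the script alive, waiting for button presses
```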




Another ingenious way that Raspberry Pi has been tweaked to help individuals and communities during the pandemic: How about as a COVID cop? By using facial landmarking software and infrared temperature sensors, this Pi project makes for an affordable, touch-free kiosk that provides contactless temperature checks and confirms that each person that passes is wearing a mask.

Because fever is a leading symptom of COVID-19, temperature checkpoints have been staples in some schools, offices, and other workplaces. It's not always possible, though, to manually check temperatures using a contactless thermometer (you need the personnel to do that), and doing so places the person taking temperatures at risk of exposure.

To solve these problems, a 19-year-old maker designed a kiosk that automates the process of temperature checks by using facial landmarking, deep-learning tech, and an IR temperature sensor: the TouchFree v2 Contactless Temperature and Mask Checkup. His model was made with a Raspberry Pi 3 Model B, a Pi Touch display, a Pi Camera Module, and several other bits, supplemented by a 3D-printed structure. A Raspbian variant serves as the OS.
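
A heavily simplified sketch of the detection loop might look like the following. It uses OpenCV's stock Haar-cascade face detector, stubs out the IR sensor read, and leaves the mask check as a comment, since that needs a separately trained classifier; the threshold and hardware details are assumptions, not the maker's actual code.

```python
import time
import cv2  # pip install opencv-python

# Stock Haar-cascade face detector that ships with OpenCV
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)
FEVER_THRESHOLD_C = 38.0

def read_ir_temperature() -> float:
    """Placeholder for the IR sensor read (e.g. an MLX90614 over I2C)."""
    return 36.6  # replace with the real sensor driver call

camera = cv2.VideoCapture(0)
try:
    while True:
        ok, frame = camera.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        if len(faces) > 0:
            temp = read_ir_temperature()
            status = "FEVER" if temp >= FEVER_THRESHOLD_C else "OK"
            print(f"Visitor detected: {temp:.1f} C -> {status}")
            # A mask check would go here, using a separately trained classifier.
        time.sleep(0.5)
except KeyboardInterrupt:
    pass
finally:
    camera.release()
```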


Although this project seems simple, it’s a fantastic way for beginners to work on basic logic, or for more advanced makers to show off their skills. Using LEDs to go from simple patterns to complex transitions, you can create a stunning visual-accent piece for your home.

Add a microphone, and you can even program the LEDs to alter their color or pattern based on sounds, reacting to voices or music in your home. The LumiCube project shown above is projected to be available as a kit in 2022 via an Indiegogo push (preceded by a Kickstarter) that was running at this writing. (In this design, you would bring your own Pi to the kit.) The "project source" link further up showcases the build and programming behind it.
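
For a flavor of how the sound-reactive mode could be wired up, here is a rough sketch that maps microphone loudness to LED color. It assumes an 8x8 WS281x-style panel on GPIO 18 and uses the rpi_ws281x and PyAudio libraries rather than LumiCube's own kit software, so treat it as an independent approximation.

```python
import numpy as np
import pyaudio                            # pip install pyaudio
from rpi_ws281x import PixelStrip, Color  # pip install rpi-ws281x (usually needs root)

LED_COUNT, LED_PIN = 64, 18               # assumption: 8x8 panel driven from GPIO 18
strip = PixelStrip(LED_COUNT, LED_PIN)
strip.begin()

audio = pyaudio.PyAudio()
stream = audio.open(format=pyaudio.paInt16, channels=1, rate=44100,
                    input=True, frames_per_buffer=1024)

while True:
    # Read a short chunk from the USB microphone and measure its loudness (RMS).
    samples = np.frombuffer(stream.read(1024, exception_on_overflow=False),
                            dtype=np.int16)
    rms = np.sqrt(np.mean(samples.astype(np.float64) ** 2))
    level = min(int(rms / 3000 * 255), 255)  # crude loudness-to-brightness mapping
    for i in range(LED_COUNT):
        strip.setPixelColor(i, Color(level, 0, 255 - level))
    strip.show()
```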



One of the most popular uses for Raspberry Pi hardware is to breathe new life into older technology that would otherwise have joined a landfill. Take that dead old Apple laptop out of your closet, or grab that '90s-era IBM ThinkPad from your parents’ attic, and use it as a housing for a Raspberry Pi!

Older laptops are ideal for this project, as their chassis' larger internal volume provides enough space for the hardware you need. Most of the Pi hardware can be placed under the keyboard after gutting the original internals. In the example project above, the user hollowed out an older MacBook laptop and showed off the project at this Reddit thread(https://www.reddit.com/r/raspberry_pi/comments/n4xzd0/macbook_pi_from_an_old_a1181_and_rpi4).

Mind you, none of this will be easy. The original onboard batteries may need to be replaced or modified to power both the Raspberry Pi and the display, with careful selection of a voltage regulator or switching power supply to ensure the Raspberry Pi sees just 5 volts of input. Also note that external controllers will be needed to get the laptop screen acting as the Pi display. (Assuming that that part of the laptop still works!) Not one for the faint of heart or hardware.

Many resourceful gamers have created retro-gaming consoles using emulation software running on Raspberry Pi. The newest variation on this theme is creating mobile game consoles the same size as the original Nintendo Game Boy, or fitting an entire retro gaming console inside a single SNES game cartridge. How's that for a dramatic meta-illustration of how far things have come in video gaming?

The SNES:Pi Zero runs the RetroPie operating system. RetroPie can emulate thousands of games. (Of course, you need to download the game ROMs separately.) The Raspberry Pi Zero W at the heart of the SNES cartridge can run the majority of console games released prior to the N64. The hardware inside comprises a USB hub, the Pi Zero, and a microSD card. You will, of course, also need an external display or TV.



Stop wasting time installing ad blockers on every device and every web browser in your home and use the power of Raspberry Pi to block ads across your entire home network. The Pi-hole is a DNS-based filtering tool that runs on the super-cheap Raspberry Pi Zero and blocks ads before any other device on your Wi-Fi network gets involved.

If you’re concerned about blocking ads that you need to see (maybe your boss wants you to confirm that ads are working on the company website), you can also whitelist specific URLs so those ads remain untouched.



You want to block door-to-door salespeople or other solicitors. Or maybe the pandemic has left you unready to deal with actual humans at your front door quite yet. Then try a Raspberry Pi-based smart doorbell with video intercom. For this project, you’ll need an LCD screen, a call button, a speaker, a microphone, and a camera so you can video chat with people at your front door. (It can also be implemented as a room-to-room intercom.)

With this project, you can implement a simple script to send a Gmail notification to your phone whenever someone presses the doorbell/intercom button. From there, the Raspberry Pi can use a free video conferencing service like Jitsi Meet to enable a live video chat. The bits include a Pi 3 Model B; LCD, camera, and mic components; and a host of internal connectors.
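
A minimal version of the notification script could look like the sketch below. It assumes the call button sits on GPIO 17 and that you authenticate with a Gmail app password; the addresses, pin number, and message text are placeholders, and the video chat itself is handled separately by Jitsi Meet in a browser.

```python
import smtplib
from email.message import EmailMessage
from gpiozero import Button
from signal import pause

GMAIL_USER = "you@gmail.com"          # placeholder
GMAIL_APP_PASSWORD = "app-password"   # placeholder: a Gmail app password, not your login
TO_ADDRESS = "your.phone@gmail.com"   # placeholder: an address that reaches your phone

def notify() -> None:
    # Build and send a short email via Gmail's SMTP-over-SSL endpoint.
    msg = EmailMessage()
    msg["Subject"] = "Someone is at the front door"
    msg["From"] = GMAIL_USER
    msg["To"] = TO_ADDRESS
    msg.set_content("Doorbell pressed. Join the video chat to answer.")
    with smtplib.SMTP_SSL("smtp.gmail.com", 465) as server:
        server.login(GMAIL_USER, GMAIL_APP_PASSWORD)
        server.send_message(msg)

doorbell = Button(17)          # assumption: call button wired to GPIO 17
doorbell.when_pressed = notify
pause()                        # wait for presses indefinitely
```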



No one appreciates having their Amazon deliveries stolen from their front porch. You can find a variety of cheap alarms online that use motion detectors, but then you have an alarm that goes off every time someone approaches your front door.

This porch pirate alarm project uses Raspberry Pi and artificial intelligence to identify when packages have been delivered and then sounds an alarm only when a package has been removed. You can even set up the camera to send you footage of the thief so you can provide that evidence to local authorities. The model demonstrated here is built around a Raspberry Pi 4 and a Wyze video camera.



Why use a boring Amazon Echo with Alexa or a Google Home device when you can use Raspberry Pi to bring HAL 9000 from 2001: A Space Odyssey to life? HAL won’t be a fictional artificial intelligence after you’ve finished this project.

Okay, the guts may be a bit less impressive than the film may suggest they should be: At its core, this Raspberry Pi HAL 9000 is just a humble Pi Model 3 or 4, a speaker, a USB microphone, and a red LED with some creative window dressing. The maker here also has the HAL-alike serving as a NAS. Just remember to run if HAL starts singing “Daisy Bell.”

Why should we hominids be the only ones to benefit from cheap DIY tech? You can use the power of Raspberry Pi, a button, and a simple motor to build a dog treat dispenser so humanity's best friend can reap the rewards of artificial intelligence. (Granted, the pegboard-mounted design is a bit rustic.)

Of course, the only thing better than pressing a button to deliver dog treats is to have it happen automatically. This Reddit user set up his Raspberry Pi as a web scraper to detect when his Instagram account gets a new follower. The follower addition then triggers the motor, which, in turn, activates the treat dispenser.






Yes, before you ask: Of course Ring cameras already exist. Those and other cheap security cameras are a great way to keep an eye on your home, but now you can use Raspberry Pi and some creativity to identify family and friends, and send you an alert to let you know if, say, a buddy or your mother-in-law drops by when you’re away, but doesn't leave a note.

The basic setup consists of a Raspberry Pi 4 Model B, a micro SD card, a camera with enclosure, and a power supply. Once you’ve set up the camera, you can send the footage to another client device or send it straight to your phone. With some additional programming, you can use facial recognition to identify specific people and have the Pi send you an alert when an unknown person visits your home.
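
As a sketch of the recognition step, the snippet below uses the open-source face_recognition library to compare each captured frame against a couple of reference photos. The file names, visitor names, and the plain print() alert are placeholders for whatever photos and notification method you actually use.

```python
import cv2                # pip install opencv-python
import face_recognition   # pip install face_recognition (pulls in dlib)

# Load one reference photo per known visitor; paths and names are placeholders.
known_encodings, known_names = [], []
for name, path in [("Mom", "mom.jpg"), ("Buddy", "buddy.jpg")]:
    image = face_recognition.load_image_file(path)
    encodings = face_recognition.face_encodings(image)
    if encodings:
        known_encodings.append(encodings[0])
        known_names.append(name)

camera = cv2.VideoCapture(0)
while True:
    ok, frame = camera.read()
    if not ok:
        break
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)  # face_recognition expects RGB
    for encoding in face_recognition.face_encodings(rgb):
        matches = face_recognition.compare_faces(known_encodings, encoding)
        visitor = known_names[matches.index(True)] if any(matches) else "an unknown person"
        print(f"Alert: {visitor} is at the door")  # swap in an email or push notification
camera.release()
```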



Get rid of that boring old mirror in your living room and replace it with a modern version of the mythical magic mirror! A smart mirror displays web-based applications that let you check local news, the weather, your daily calendar, and more while you preen, pluck, shave, apply makeup, or dress. The Pi comes into play as the compute source and the driver of the built-in video.

This project involves some basic carpentry (for the mirror frame) and benefits from a 3D printer for some of the framework, but you can keep it simpler: You can use an old LED desktop monitor, an acrylic see-through mirror, and a Raspberry Pi as the basis for your own smart mirror.
paserbyp: (Default)
The U.S. National Security Agency (NSA) has issued a FAQ(https://media.defense.gov/2021/Aug/04/2002821837/-1/-1/1/Quantum_FAQs_20210804.PDF) titled "Quantum Computing and Post-Quantum Cryptography FAQs" in which the agency explores the potential implications for national security of the likely arrival of a "brave new world" beyond the classical computing sphere. As the race for quantum computing accelerates, with a myriad of players attempting to achieve quantum supremacy through various exotic avenues of scientific investigation, the NSA document examines the potential security concerns arising from the prospective creation of a “Cryptographically Relevant Quantum Computer” (CRQC).

A CRQC is a quantum computer powerful enough to break current encryption schemes designed for classical computing. While these schemes (think AES-256, more common on the consumer side, or RSA with 3072-bit or larger keys among asymmetric algorithms) are virtually impossible to crack with current or even future classical supercomputers, a quantum computer doesn't play by the same rules, owing to the nature of the beast and the superposition states available to its computing unit, the qubit.

With the race for quantum computing featuring major private and state players, it's not just the expected $26 billion value of the quantum computing sphere by 2030 that worries security experts - but the possibility of quantum systems falling into the hands of rogue entities. We need only look to the history of hacks in the blockchain sphere to see that where there is an economic incentive, there are hacks - and data is expected to become the number one economic source in a (perhaps not so) distant future.

Naturally, an entity such as the NSA, which ensures the safety of the U.S.'s technological infrastructure, has to not only deal with present threats, but also future ones - as one might imagine, it takes an inordinate amount of time for entities as grand as an entire country's critical government systems to be updated.

According to the NSA, "New cryptography can take 20 years or more to be fully deployed to all National Security Systems (NSS)". And as the agency writes in its document, "(...) a CRQC would be capable of undermining the widely deployed public key algorithms used for asymmetric key exchanges and digital signatures. National Security Systems (NSS) — systems that carry classified or otherwise sensitive military or intelligence information — use public key cryptography as a critical component to protect the confidentiality, integrity, and authenticity of national security information. Without effective mitigation, the impact of adversarial use of a quantum computer could be devastating to NSS and our nation, especially in cases where such information needs to be protected for many decades."

The agency's interest in quantum computing runs deep: the document trove leaked by Edward Snowden(https://www.washingtonpost.com/world/national-security/nsa-seeks-to-build-quantum-computer-that-could-crack-most-types-of-encryption/2014/01/02/8fff297e-7195-11e3-8def-a33011492df2_story.html?hpid=z1) revealed that the agency invested $79.7 million in a research program titled “Penetrating Hard Targets,” which aimed to explore whether a quantum computer capable of breaking traditional encryption protocols was feasible to pursue at the time.

This is especially important considering that an algorithm a quantum computer could use to break traditional encryption schemes already exists: Shor's algorithm, first described in 1994, when control over the qubit was still little more than a distant dream. The only thing standing in the way of implementing Shor's algorithm at a meaningful scale is that it requires far more qubits than is currently feasible, orders of magnitude more than today's most advanced quantum computing designs, which max out at around "only" one hundred qubits.
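
To see why this matters, here is a tiny classical illustration in Python of the number-theoretic step that Shor's algorithm accelerates: factoring N = 15 from the period of 7^x mod 15. A quantum computer finds that period exponentially faster; this brute-force version is only for intuition.

```python
from math import gcd

def classical_period(a: int, n: int) -> int:
    """Brute-force the period r of a^x mod n (the step a quantum computer speeds up)."""
    x, value = 1, a % n
    while value != 1:
        value = (value * a) % n
        x += 1
    return x

N, a = 15, 7
r = classical_period(a, N)        # r = 4, since 7^4 = 2401 = 1 (mod 15)
# For even r with a^(r/2) != -1 (mod N), gcd yields non-trivial factors of N.
p = gcd(pow(a, r // 2) - 1, N)
q = gcd(pow(a, r // 2) + 1, N)
print(r, p, q)                    # 4 3 5  ->  15 = 3 * 5
```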

It is only a matter of time, however, before such systems exist. The answer lies in the creation and deployment of so-called post-quantum cryptography: encryption schemes designed to give pause to, or even completely thwart, future CRQCs. These already exist. However, deploying them now, while the cryptographic threat of quantum computing still lies beyond the horizon, would raise issues of infrastructure interoperability: different systems from different agencies and branches must share confidential information and understand what they're transmitting to each other.

In its documentation, the NSA places the choice of exactly which post-quantum cryptography will be implemented across the U.S. national infrastructure at the feet of the National Institute of Standards and Technology (NIST), which is "in the process of standardizing quantum-resistant public key in their Post-Quantum Standardization Effort, which started in 2016. This multi-year effort is analyzing a large variety of confidentiality and authentication algorithms for inclusion in future standards," the NSA writes.

But contrary to what some would have you think, the NSA knows that it's a matter of time before quantum computing turns the security world on its proverbial head. There's no stopping the march of progress; as the agency writes, "The intention is to (...) remove quantum-vulnerable algorithms and replace them with a subset of the quantum-resistant algorithms selected by NIST at the end of the third round of the NIST post-quantum effort."

Quantum is coming; post-quantum security must come before it.
paserbyp: (Default)


On August 12, 1981, IBM introduced the IBM Personal Computer: https://www.pcmag.com/news/project-chess-the-story-behind-the-original-ibm-pc

List of innovative IBM machines from the first decade of the PC, 1981-1990: https://www.pcmag.com/news/the-golden-age-of-ibm-pcs

IBM Press Release: https://www.ibm.com/ibm/history/exhibits/pc25/pc25_press.html

Vinyl

Nov. 24th, 2020 03:11 pm
paserbyp: (Default)


Most PCs boot from primary storage media, be it a hard disk drive or a solid-state drive, perhaps from a network, or, if all else fails, a USB stick or boot DVD comes to the rescue… Fun, eh? Boring! Why don’t we try to boot from a record player for a change?

So this nutty little experiment connects a PC, or an IBM PC (Details: http://boginjr.com/electronics/old/ibm5150) to be exact, directly to a record player through an amplifier. A small ROM boot loader operates the built-in “cassette interface” of the PC (which was hardly ever used), invoked by the BIOS if all the other boot options fail, i.e. the floppy disk and the hard drive. The turntable spins an analog recording of a small bootable read-only RAM drive, 64K in size. It contains a FreeDOS kernel modified to cram it into the memory constraint, a micro variant of COMMAND.COM, and a patched version of INTERLNK (which allows file transfer through a printer cable) modified to run on FreeDOS. The bootloader reads the disk image from the audio recording through the cassette modem, loads it into memory, and boots the system on it.

And now to get more technical: this is basically a merge between BootLPT/86(Details: http://boginjr.com/it/sw/dev/bootlpt-86) and 5150CAXX(Details: http://boginjr.com/it/sw/dev/5150caxx), minus the printer port support. It also resides in a ROM, in the BIOS expansion socket, but it does not have to. The connecting cable between the PC and the record player amplifier is the same as with 5150CAXX, just without the line-in (PC data out) jack.

The “cassette interface” itself is just PC speaker timer channel 2 for the output, and 8255A-5 PPI port C channel 4 (PC4, I/O port 62h bit 4) for the input. BIOS INT 15h routines are used for software (de)modulation.

The boot image is the same 64K BOOTDISK.IMG “example” RAM drive that can be downloaded at the bottom of the BootLPT article. This has been turned into an “IBM cassette tape”-protocol compliant audio signal using 5150CAXX, and sent straight to a record cutting lathe.

Vinyl records are cut with an RIAA equalization curve that a preamp usually reverses during playback, but not perfectly. So some signal correction had to be applied at the amplifier to make it work right with the line output straight from the phono preamp. In this case, involving a vintage Harman&Kardon 6300 amplifier with an integrated MM phono preamp, the treble had to be faded all the way down to -10dB/10kHz, bass equalization increased to approx. +6dB/50Hz, and the volume level reduced to approximately 0.7 volts peak so it doesn’t distort. All this, naturally, with any phase and loudness correction turned off.

Of course, the cassette modem does not give a hoot in hell about where the signal is coming from. Nevertheless, the recording needs to be pristine and contain no pops or loud crackles (vinyl) or modulation/frequency drop-outs (tape) that would break the data stream. However, some wow is tolerated, and the speed can be 2 or 3 percent higher or lower too.

For those interested, the bootloader binary designed for a 2364 chip (2764s can be used, through an adaptor), can be obtained here: http://boginjr.com/apps/vinyl-boot/BootVinyl.bin

It assumes an IBM 5150 with a monochrome screen and at least 512K of RAM. The boot disk image can be obtained at the bottom of the BootLPT/86 article, and here’s its analog variant: http://boginjr.com/misc/bootdisk.flac

Z4

Oct. 17th, 2020 08:05 am
paserbyp: (Default)
The Z4 is considered the oldest preserved digital computer in the world and is one of those machines that takes up a whole room, runs on magnetic tapes, and needs multiple people to operate. Today it sits in the Deutsches Museum in Munich, unused. Until now, historians and curators had only limited knowledge of its secrets because the manual was lost long ago.

The computer’s inventor, Konrad Zuse, began building it for the Nazis in 1942, then refused its use in the V1 and V2 rocket program. Instead, he fled to a small town in Bavaria and stowed the computer in a barn until the end of the war. It wouldn’t see operation until 1950. The Z4 proved to be a very reliable and impressive computer for its time. With its large instruction set it was able to run complicated scientific programs, and it could work through the night without supervision, which was unheard of at the time.

These qualities made the Zuse Z4 particularly useful to the Institute of Applied Mathematics at the Swiss Federal Institute of Technology (ETH), where the computer performed advanced calculations for Swiss engineers in the early '50s. Around 100 jobs were carried out with the Z4 between 1950 and 1955. These included calculations on the trajectory of rockets… on aircraft wings… and on flutter vibrations, an operation requiring 800 hours of machine time.

René Boesch, one of the airplane researchers working on the Z4 in the '50s, kept a copy of the manual among his papers, and it was there that his daughter, Evelyn Boesch, also an ETH researcher, discovered it. View it online here: https://www.e-manuscripta.ch/zut/content/pageview/2856521

You can read the full story of the computer’s development, operation, and the rediscovery of its only known copy of the operating instructions here: https://cacm.acm.org/blogs/blog-cacm/247521-discovery-user-manual-of-the-oldest-surviving-computer-in-the-world/fulltext
paserbyp: (Default)
It was a cloudy Seattle day in late 1980, and Bill Gates, the young chairman of a tiny company called Microsoft, had an appointment with IBM that would shape the destiny of the industry for decades to come.

He went into a room full of IBM lawyers, all dressed in immaculately tailored suits. Bill’s suit was rumpled and ill-fitting, but it didn’t matter. He wasn’t here to win a fashion competition.

Over the course of the day, a contract was worked out whereby IBM would purchase, for a one-time fee of about $80,000, perpetual rights to Gates’ MS-DOS operating system for its upcoming PC. IBM also licensed Microsoft’s BASIC programming language, all that company's other languages, and several of its fledgling applications. The smart move would have been for Gates to insist on a royalty so that his company would make a small amount of money for every PC that IBM sold.

But Gates wasn’t smart. He was smarter.

In exchange for giving up perpetual royalties on MS-DOS, which would be called IBM PC-DOS, Gates insisted on retaining the rights to sell DOS to other companies. The lawyers looked at each other and smiled. Other companies? Who were they going to be? IBM was the only company making the PC. Other personal computers of the day either came with their own built-in operating system or licensed Digital Research’s CP/M, which was the established standard at the time.

Gates wasn’t thinking of the present, though. “The lesson of the computer industry, in mainframes, was that over time people built compatible machines,” Gates explained in an interview for the 1996 PBS documentary Triumph of the Nerds. As the leading manufacturer of mainframes, IBM experienced this phenomenon, but the company was always able to stay ahead of the pack by releasing new machines and relying on the power of its marketing and sales force to relegate the cloners to also-ran status.

The personal computer market, however, ended up working a little differently. PC Cloners were smaller, faster, and hungrier companies than their mainframe counterparts. They didn’t need as much startup capital to start building their own machines, especially after Phoenix and other companies did legal, clean-room, reverse-engineered implementations of the BIOS (Basic Input/Output System) that was the only proprietary chip in the IBM PC’s architecture. To make a PC clone, all you needed to do was put a Phoenix BIOS chip into your own motherboard design, design and manufacture a case, buy a power supply, keyboard, and floppy drive, and license an operating system. And Bill Gates was ready and willing to license you that operating system.

IBM went ahead and tried to produce a new model computer to stay ahead of the cloners, but the PC/AT’s day in the sun was short-lived. Intel was doing a great business selling 286 chips to clone companies, and buyers were excited to snap up 100 percent compatible AT clones at a fraction of IBM’s price.

Intel and Microsoft were getting rich, but IBM’s share of the PC pie was getting smaller and smaller each year. Something had to be done—the seeds were sown for the giant company to fight an epic battle to regain control of the computing landscape from the tiny upstarts.

IBM had only gone to Microsoft for an operating system in the first place because it was pressed for time. By 1980, the personal computing industry was taking off, causing a tiny revolution in businesses all over the world. Most big companies had, or had access to, IBM mainframes. But these were slow and clunky machines, guarded by a priesthood of technical administrators and unavailable for personal use. People would slyly bring personal computers like the TRS-80, Osborne, and Apple II into work to help them get ahead of their coworkers, and they were often religious fanatics about them. “The concern was that we were losing the hearts and minds,” former IBM executive Jack Sams said in an interview. “So the order came down from on high: give us a machine to win us back the hearts and minds.” But the chairman of IBM worried that his company’s massive bureaucracy would make any internal PC project take years to produce, by which time the personal computer industry might already be completely taken over by non-IBM machines.

So a rogue group in Boca Raton, Florida—far away from IBM headquarters—was allowed to use a radical strategy to design and produce a machine using largely off-the-shelf parts and a third-party CPU, operating system, and programming languages. It went to Microsoft to get the last two, but Microsoft didn’t have the rights to sell them an OS and directed the group to Digital Research, who was preparing a 16-bit version of CP/M that would run on the 8088 CPU that IBM was putting into the PC. In what has become a legendary story, Digital Research sent IBM’s people away when Digital Research’s lawyers refused to sign a non-disclosure agreement. Microsoft, worried that the whole deal would fall apart, frantically purchased the rights to Tim Patterson’s QDOS (“Quick and Dirty Operating System”) from Seattle Computer Products. Microsoft “cleaned up” QDOS for IBM, getting rid of the unfortunate name and allowing the IBM PC to launch on schedule. Everyone was happy, except perhaps Digital Research’s founder, Gary Kildall.

But that was all in the past. It was now 1984, and IBM had a different problem: DOS was pretty much still a quick and dirty hack. The only real new thing that had been added to it was directory support so that files could be organized a bit better on the IBM PC/AT’s new hard disk. And thanks to the deal that IBM signed in 1980, the cloners could get the exact same copy of DOS and run exactly the same software. IBM needed to design a brand new operating system to differentiate the company from the clones. Committees were formed and meetings were held, and the new operating system was graced with a name: OS/2.

Long before operating systems got exciting names based on giant cats and towns in California named after dogs, most of their names were pretty boring. IBM would design a brand new mainframe and release an operating system with a similar moniker. So the new System/360 mainframe line would run the also brand-new OS/360. It was neat and tidy, just like an IBM suit and jacket.

IBM wanted to make a new kind of PC that couldn’t be as easily cloned as its first attempt, and the company also wanted to tie it, in a marketing kind of way, to its mainframes. So instead of a Personal Computer or PC, you would have a Personal System (PS), and since it was the successor to the PC, it would be called the PS/2. The new advanced operating system would be called OS/2.

Naming an OS was a lot easier than writing it, however, and IBM management still worried about the length of time that it would take to write such a thing itself. So instead, the group decided that IBM would design OS/2 but Microsoft would write most of the actual code. Unlike last time, IBM would fully own the rights to the product and only IBM could license it to third parties.

Why would Microsoft management agree to develop a project designed to eliminate the very cash cow that made them billionaires? Steve Ballmer explained:

“It was what we used to call at the time ‘Riding the Bear.' You just had to try to stay on the bear’s back, and the bear would twist and turn and try to throw you off, but we were going to stay on the bear, because the bear was the biggest, the most important… you just had to be with the bear, otherwise you would be under the bear.”

IBM was a somewhat angry bear at the time as the tiny ferrets of the clone industry continued to eat its lunch, and many industry people started taking OS/2 very, very seriously before it was even written. What it didn’t know was that events were going to conspire to make OS/2 a gigantic failure right out of the gate.

In 1984, IBM released the PC/AT, which sported Intel’s 80286 central processor. The very next year, however, Intel released a new chip, the 80386, that was better than the 286 in almost every way.

The 286 was a 16-bit CPU that could address up to 16 megabytes of random access memory (RAM) through a 24-bit address bus. It addressed this memory in a slightly different way from its older, slower cousin the 8086, and the 286 was the first Intel chip to have memory management tools built in. To use these tools, you had to enter what Intel called “protected mode," in which the 286 opened up all 24 bits of its memory lines and went full speed. If it wasn’t in protected mode, it was in “real” mode, where it acted like a faster 8086 chip and was limited to only one megabyte of RAM (the 640KB limit was an arbitrary choice by IBM to allow for the original PC to use the extra bits of memory for graphics and other operations).

The trouble with protected mode in the 286 was that when you were in it, you couldn’t get back to real mode without a reboot. Without real mode it was very difficult to run MS-DOS programs, which expected to have full access and control of the computer at all times. Bill Gates knew everything about the 286 chip and called it “brain-damaged," but for Intel, it was a transitional CPU that led to many of the design decisions of its successor.

The 386 was Intel’s first truly modern CPU. Not only could it access a staggering 4GB of RAM in 32-bit protected mode, but it also added a “Virtual 8086” mode that could run at the same time, allowing many full instances of MS-DOS applications to operate simultaneously without interfering with each other. Today we take virtualization for granted and happily run entire banks of operating systems at once on a single machine, but in 1985 the concept seemed like it was from the future. And for IBM, this future was scary.
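
The address-space jump those bus widths imply is easy to verify with a couple of lines of Python (a quick aside, stated in binary units):

```python
MB, GB = 1024 ** 2, 1024 ** 3
print(2 ** 20 // 1024, "KB")  # 8086 real mode: 20-bit addresses -> 1,024 KB (the 1 MB ceiling)
print(2 ** 24 // MB, "MB")    # 286 protected mode: 24-bit address bus -> 16 MB
print(2 ** 32 // GB, "GB")    # 386 protected mode: 32-bit addresses -> 4 GB
```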

The 386 was an expensive chip when it was introduced, but IBM’s experience with the PC/AT told the company that the price would clearly come down over time. And a PC with a 386 chip and a proper 386-optimized operating system, running multiple virtualized applications in a huge memory space… that sounded an awful lot like a mainframe, only at PC clone prices. So should OS/2 be designed for the 386? IBM’s mainframe division came down on this idea like a ton of bricks. Why design a system that could potentially render mainframes obsolete?

So OS/2 was to run on the 286, and DOS programs would have to run one at a time in a “compatibility box” if they could be run at all. This wasn’t such a bad thing from IBM’s perspective, as it would force people to move to OS/2-native apps that much faster. So the decision was made, and Microsoft and Bill Gates would just have to live with it.

There was another problem that was happening in 1985, and both IBM and Microsoft were painfully aware of it. The launch of the Macintosh in ’84 and the Amiga and Atari ST in ’85 showed that reasonably priced personal computers were now expected to come with a graphical user interface (GUI) built in. Microsoft rushed to release the laughably underpowered Windows 1.0 in the same year so that it could have a stake in the GUI game. IBM would have to do the same or fall behind.

The trouble was that GUIs took a while to develop, and they took up more resources than their non-GUI counterparts. In a world where most 286 clones came with only 1MB RAM standard, this was going to pose a problem. Some GUIs, like the Workbench that ran on the highly advanced Amiga OS, could squeeze into a small amount of RAM, but AmigaOS was designed by a tiny group of crazy geniuses. OS/2 was being designed by a giant IBM committee. The end result was never going to be pretty.

OS/2 was plagued by delays and bureaucratic infighting. IBM rules about confidentiality meant that some Microsoft employees were unable to talk to other Microsoft employees without a legal translator between them. IBM also insisted that Microsoft would get paid by the company's standard contractor rates, which were calculated by “kLOCs," or a thousand lines of code. As many programmers know, given two routines that can accomplish the same feat, the one with fewer lines of code is generally superior—it will tend to use less CPU, take up less RAM, and be easier to debug and maintain. But IBM insisted on the kLOC methodology.

All these problems meant that when OS/2 1.0 was released in December 1987, it was not exactly the leanest operating system on the block. Worse than that, the GUI wasn’t even ready yet, so in a world of Macs and Amigas and even Microsoft Windows, OS/2 came out proudly dressed up in black-and-white, 80-column, monospaced text.

OS/2 did have some advantages over the DOS it was meant to replace—it could multitask its own applications, and each application would have a modicum of protection from the others thanks to the 286’s memory management facilities. But OS/2 applications were rather thin on the ground at launch, because despite the monumental hype over the OS, it was still starting out at ground zero in terms of market share. Even this might have been something that could be overcome were it not for the RAM crisis.

RAM prices had been trending down for years, from $880 per MB in 1985 to a low of $133 per MB in 1987. This trend sharply reversed in 1988 when demand for RAM and production difficulties in making larger RAM chips caused a sudden shortfall in the market. With greater demand and constricted supply, RAM prices shot up to over $500 per MB and stayed there for two years.

Buyers of clone computers had a choice: they could stick with the standard 1MB of RAM and be very happy running DOS programs and maybe even a Windows app (Windows 2.0 had come out in December of 1987 and while it wasn’t great, it was at least reasonable, and it did barely manage to run with that much memory). Or they could buy a copy of OS/2 1.0 Standard Edition from IBM for $325 and then pay an extra $1,000 to bump up to 3MB of RAM, which was necessary to run both OS/2 and its applications comfortably.

Needless to say, OS/2 was not an instant smash hit in the marketplace.

But wait. Wasn’t OS/2 supposed to be a differentiator for IBM to sell its shiny new PS/2 computers? Why would IBM want to sell it to the owners of clone computers anyway? Wasn’t it necessary to own a PS/2 in order to run OS/2 in the first place?

This confusion wasn’t an accident. IBM wanted people to think this way.

IBM had spent a lot of time and money developing the PS/2 line of computers, which was released in 1987, slightly before OS/2 first became available. The company ditched the old 16-bit Industry Standard Architecture (ISA), which had become the standard among all clone computers, and replaced it with its proprietary Micro Channel Architecture (MCA), a 32-bit bus that was theoretically faster. To stymie the clone makers, IBM infused MCA with the most advanced legal technology available, so much so that third-party makers of MCA expansion cards actually had to pay IBM a royalty for every card sold. In fact, IBM even tried to collect back-pay royalties for ISA cards that had been sold in the past.

The PS/2s were also the first PCs to switch over to 3.5-inch floppy drives, and they pioneered the little round connectors for the keyboard and mouse that remain on some motherboards to this day. They were attractively packaged and fairly reasonably priced at the low end, but the performance just wasn’t there. The PS/2 line started with the Models 25 and 30, which had no Micro Channel and only a lowly 8086 running at conservatively slow clock speeds. They were meant to get buyers interested in moving up to the Models 50 and 60, which used 286 chips and had MCA slots, and the high-end Models 70 and 80, which came with a 386 chip and a jaw-droppingly high price tag to go with it.

You could order the Model 50 and higher with OS/2 once it became available. You didn’t have to stick with the “Standard Edition” either: IBM also offered an “Extended Edition” of OS/2 that came equipped with a communications suite, networking tools, and an SQL manager. The Extended Edition would only run on true-blue IBM PS/2 computers; no clones were allowed to that fancy dress party.

These machines were meant to wrest control of the PC industry away from the clone makers, but they were also meant to subtly push people back toward a world where PCs were the servants and mainframes were the masters. They were never allowed to be too fast or to run a proper operating system that would take advantage of the 32-bit computing power of the 386 chip. In trying to do two contradictory things at once, they failed at both.

The clone industry decided not to bother tangling with IBM’s massive legal department and simply didn’t try to clone the PS/2 on anything other than a cosmetic level. Sure, they couldn’t have the shiny new MCA expansion slots, but since MCA cards were rare and expensive and the performance was throttled back anyway, it wasn’t so bad to stick with ISA slots instead. Compaq even brought together a consortium of PC clone vendors to create a new standard bus called EISA, which filled in the gaps at the high end until other standards became available. And the crown jewel of the PS/2, the OS/2 operating system, was late. It was also initially GUI-less, and when the GUI did come with the release of OS/2 1.1 in 1988, it required too much RAM to be economically viable for most users.

As the market shifted and the clone makers started selling more and more fast and cheap 386 boxes with ISA slots, Bill Gates took one of his famous “reading week” vacations and emerged with the idea that OS/2 probably didn’t have a great future. Maybe the IBM Bear was getting ready to ride straight off a cliff. But how does one disentangle from riding a bear, anyway? The answer was "very, very carefully."

It was late 1989, and Microsoft was hard at work putting the final touches on what the company knew was the best release of Windows yet. Version 3.0 was going to up the graphical ante with an exciting new 3D beveled design (which had first appeared with OS/2 1.2) and shiny new icons, and it would support Virtual 8086 mode on a 386, making it easier for people to spend more time in Windows and less time in DOS. It was going to be an exciting product, and Microsoft told IBM so.

IBM still saw Microsoft as a partner in the operating systems business, and it offered to help the smaller company by doing a full promotional rollout of Windows 3.0. But in exchange, IBM wanted to buy out the rights to the software itself, nullifying the DOS agreement that let Microsoft license to third parties. Bill Gates looked at this and thought about it carefully—and he decided to walk away from the deal.

IBM saw this as a betrayal and circulated internal memos that the company would no longer be writing any third-party applications for Windows. The separation was about to get nasty.

Unfortunately, Microsoft still had contractual obligations for developing OS/2. IBM, in a fit of pique, decided that it no longer needed the software company’s help. In an apt twist given the operating system’s name, the two companies decided to split OS/2 down the middle. At the time, this parting of the ways was compared to a divorce.

IBM would take over the development of OS/2 1.x, including the upcoming 1.3 release that was intended to lower RAM requirements. It would also take over the work that had already been done on OS/2 2.0, which was the long-awaited 32-bit rewrite. By this time, IBM finally bowed to the inevitable and admitted its flagship OS really needed to be detached from the 286 chip.

Microsoft would retain its existing rights to Windows, minus IBM’s marketing support, and the company would also take over the rights to develop OS/2 3.0. This was known internally as OS/2 NT, a pie-in-the-sky rewrite of the operating system that would have some unspecified “New Technology” in it and be really advanced and platform-independent. It might have seemed that IBM was happy to get rid of the new high-end variant of OS/2 given that it would also encroach on mainframe territory, but in fact IBM had high-end plans of its own.

OS/2 1.3 was released in 1991 to modest success, partly because RAM prices finally declined and the new version didn’t demand quite so much of it. However, by this time Windows 3 had taken off like a rocket. It looked a lot like OS/2 on the surface, but it cost less, took fewer resources, and didn’t have a funny kind-of-but-not-really tie-in to the PS/2 line of computers. Microsoft also aggressively courted the clone manufacturers with incredibly attractive bundling deals, putting Windows 3 on most new computers sold.

IBM was losing control of the PC industry all over again. The market hadn’t swung away from the clones, and it was Windows, not OS/2, that was the true successor to DOS. If the bear had been angry before, now it was outraged. It was going to fight Microsoft on its own turf, hoping to destroy the Windows upstart forever. The stage was set for an epic battle.

IBM had actually been working on OS/2 2.0 for a long time in conjunction with Microsoft, and a lot of code was already written by the time the two companies split up in 1990. This enabled IBM to release OS/2 2.0 in April of 1992, a month after Microsoft launched Windows 3.1. Game on.

OS/2 2.0 was a 32-bit operating system, but it still contained large portions of 16-bit code from its 1.x predecessors. The High Performance File System (HPFS) was one of the subsystems that was still 16-bit, along with many device drivers and the Graphics Engine that ran the GUI. Still, the parts that most needed to be 32-bit, like the kernel and the memory manager, were.

IBM had also gone on a major shopping expedition for any kind of new technologies that might help make OS/2 fancier and shinier. It had partnered with Apple to work on next-generation OS technologies and licensed NeXTStep from Steve Jobs. While technology from these two platforms didn’t directly make it into OS/2, a portion of code from the Amiga did: IBM gave Commodore a license to its REXX scripting language in exchange for some Amiga technology and GUI ideas, and included them with OS/2 2.0.

At the time, the hottest industry buzzword was “object-oriented.” While object-oriented programming had been around for many years, it was just starting to gain traction on personal computers. IBM itself was a veteran of object-oriented technology, having developed its own Smalltalk implementation called Visual Age in the 1980s. So it made sense that IBM would want to trumpet OS/2 as being more object-oriented than anything else. The tricky part was that object orientation is mostly an internal technical matter of how program code is constructed and isn’t visible to end users.

IBM decided to make the user interface of OS/2 2.0 behave in a manner that was “object oriented.” This project ended up being called the Workplace Shell, and it became, simultaneously, the number one feature that OS/2 fans both adored and despised.

Because the default desktop of OS/2 2.0 was rather plain and the icons weren’t especially striking, it was not immediately obvious what was new and different about the Workplace Shell. As soon as you started using it, however, you saw that it was very different from other GUIs. Right-clicking on any icon brought up a contextual menu, something that hadn’t been seen before. Icons were considered to be “objects,” and you could do things with them that were vaguely object-like. Drag an icon to the printer icon and it printed. Drag an icon to the shredder and it was deleted (yes, permanently!). There was a strange icon called “Templates” that you could open up and then “drag off” blank sheets that, if you clicked on them, would open up various applications (the Apple Lisa had done something similar in 1983). Was that object-y enough for OS/2? No. Not nearly enough.

Each folder window could have various things dragged to it, and they would have different actions. If you dragged in a color from the color palette, the folder would now have that background color. You could do the same with wallpaper bitmaps. And fonts. In fact, you could do all three and quickly change any folder to a hideous combination, and each folder could be differently styled in this fashion.

In practice, this was something you either did by accident and then didn’t know how to fix or did once to demo it to a friend and then never did it again. These kinds of features were flashy, but they took up a lot of memory, and computers in 1992 were still typically sold with 2MB or 4MB of RAM.

The minimum requirement for OS/2 2.0, as displayed on the box (and a heavy box it was, containing no fewer than 21 3.5-inch floppy disks!), was 4MB of RAM. I once witnessed my local Egghead dealer trying to boot up OS/2 on a system with that much RAM. It wasn’t pretty. The operating system started thrashing to disk to swap out RAM before it had even finished booting. Then it would try to boot some more. And swap. And boot. And swap. It probably took over 10 minutes to get to a functional desktop, and guess what happened if you right-clicked a single icon? It swapped. Basically, OS/2 2.0 in this amount of RAM was unusable.

At 8MB the system worked as advertised, and at 16MB it would run comfortably without excessive thrashing. Fortunately, RAM was down to around $30 per MB by this time, so upgrading wasn’t as huge a deal as it was in the OS/2 1.x days. Still, it was a barrier to adoption, especially as Windows 3.1 ran happily in 2MB.

But Windows 3.1 was also a crash-happy, cooperative multitasking facade of an operating system with a strange, bifurcated user interface that only Bill Gates could love. OS/2 aspired to do something better. And in many ways, it did.

Despite the success of the original PC, IBM was never really a consumer company and never really understood marketing to individual people. The PS/2 launch, for example, was accompanied by an advertising push that featured the aging and somewhat befuddled cast of the 1970s TV series M*A*S*H.

This tone-deaf approach to marketing continued with OS/2. Exactly what was it, and how did it make your computer better? Was it enough to justify the extra cost of the OS and the RAM to run it well? Superior multitasking was one answer, but it was hard to understand the benefits by watching a long and boring shot of a man playing snooker. The choice of advertising spending was also somewhat curious. For years, IBM paid to sponsor the Fiesta Bowl, and it spent most of OS/2’s yearly ad budget on that one venue. Were college football fans really the best audience for multitasking operating systems?

Eventually IBM settled on a tagline for OS/2 2.0: “A better DOS than DOS, and a better Windows than Windows.” This was definitely true for the first claim and arguably true for the second. It was also a tagline that ultimately doomed the operating system.

OS/2 had the best DOS virtual machine ever seen at the time. It was so good that you could easily run DOS games fullscreen while multitasking in the background, and many games (like Wing Commander) even worked in a 320x200 window. OS/2’s DOS box was so good that you could run an entire copy of Windows inside it, and thanks to IBM’s separation agreement with Microsoft, each copy of OS/2 came bundled with something IBM called “Win-OS2.” It was essentially a free copy of Windows that ran either full-screen or windowed. If you had enough RAM, you could run each Windows app in a completely separate virtual machine running its own copy of Windows, so a single app crash wouldn’t take down any of the others.

This was a really cool feature, but it made it simple for GUI application developers to decide which operating system to support. OS/2 ran Windows apps really well out of the box, so they could just write a Windows app and both platforms would be able to run it. On the other hand, writing a native OS/2 application was a lot of work for Windows developers. The underlying application programming interfaces (APIs) were very different between the two: Windows used a barebones set of APIs called Win16, while OS/2 had a more expansive set with the unwieldy name of Presentation Manager. The two differed in many ways, right down to whether a window’s position was measured in pixels from the top of the screen or from the bottom.
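
To make just one of those differences concrete: Windows measured a window’s vertical position downward from the top-left of the screen, while Presentation Manager measured it upward from the bottom-left, so porting code meant flipping every y coordinate. Here is a minimal sketch of that conversion, using made-up screen and window sizes rather than anything from either API:

    /* Sketch only: converting a window's vertical position from the
       Windows convention (y measured down from the top of the screen)
       to the OS/2 Presentation Manager convention (y measured up from
       the bottom). The numbers below are hypothetical. */
    #include <stdio.h>

    static int windows_y_to_pm_y(int win_y, int win_height, int screen_height) {
        /* Windows gives the top edge's distance from the top of the screen;
           PM wants the bottom edge's distance from the bottom of the screen. */
        return screen_height - win_y - win_height;
    }

    int main(void) {
        int screen_height = 480;   /* e.g. a 640x480 VGA display */
        int win_height    = 200;
        int win_y         = 50;    /* 50 pixels down from the top, Windows-style */

        printf("PM y = %d\n", windows_y_to_pm_y(win_y, win_height, screen_height));
        /* Prints "PM y = 230" */
        return 0;
    }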

Some companies did end up making native OS/2 Presentation Manager applications, but they were few and far between. IBM was one, of course, and it was joined by Lotus, who was still angry at Microsoft for its alleged efforts against the company in the past. Really, though, what angered Lotus (and others, like Corel) about Microsoft was the sudden success of Windows and the skyrocketing sales of Microsoft applications that ran on it: Word, Excel, and PowerPoint. In the DOS days, Microsoft made the operating system for PCs, but it was an also-ran in the application side of things. As the world shifted to Windows, Microsoft was pushing application developers aside. Writing apps for OS/2 was one way to fight back.

It was also an opening for startup companies who didn’t want to struggle against Microsoft for a share of the application pie. One of these companies was DeScribe, who made a very good word processor for OS/2 (that I once purchased with my own money on a student budget). For an aspiring writer, DeScribe offered a nice clean writing slate that supported long filenames. Word for Windows, like Windows itself, was still limited to eight characters.

Unfortunately, the tiny companies like DeScribe ended up doing a much better job with their applications than the established giants like Lotus and Corel. The OS/2 versions of 1-2-3 and Draw were slow, memory-hogging, and buggy. This put an even bigger wet blanket over the native OS/2 applications market. Why buy a native app when the Windows version ran faster and better and could run seamlessly in Win-OS2?

As things got more desperate on the native applications front, IBM even started paying developers to write OS/2 apps. (Borland was the biggest name in this effort.) This worked about as well as you might expect: Borland had no incentive to make its apps fast or bug-free, just to ship them as quickly as possible. They barely made a dent in the market.

Still, although OS/2’s native app situation was looking dire, the operating system itself was selling quite well, reaching one million sales and hitting many software best-seller charts. Many users became religious fanatics about how the operating system could transform the way you used your computer. And compared to Windows 3.1, it was indeed a transformation. But there was another shadow lurking on the horizon.

When faced with a bear attack, most people would run away. Microsoft’s reaction to IBM’s challenge was to run away, build a fort, then build a bigger fort, then build a giant metal fortress armed with automatic weapons and laser cannons.

In 1993, Microsoft released Windows for Workgroups 3.11, which bundled small business networking with a bunch of small fixes and improvements, including some 32-bit code. While it did not sell well immediately (a Microsoft manager once joked that the internal name for the product was "Windows for Warehouses"), it was a significant step forward for the product. Microsoft was also working on Windows 4.0, which was going to feature much more 32-bit code, a new user interface, and pre-emptive multitasking. It was codenamed Chicago.

Finally, and most importantly for the future of the company, Bill Gates hired the architect of the industrial-strength minicomputer operating system VMS and put him in charge of the OS/2 3.0 NT group. Dave Cutler’s first directive was to throw away all the old OS/2 code and start from scratch. The company wanted to build a high-performance, fault-tolerant, platform-independent, and fully networkable operating system. It would be known as Windows NT.

IBM was aware of Microsoft’s plans and started preparing a new major release of OS/2 aimed squarely at them. Windows 4.0 was experiencing several public delays, so IBM decided to take a friendly bear swipe at its opponent. The third beta of OS/2 3.0 (thankfully, now delivered on a CD-ROM) was emblazoned with the words “Arrive in Chicago earlier than expected.”

OS/2 version 3.0 would also come with a new name, and unlike codenames in the past, IBM decided to put it right on the box. It was to be called OS/2 Warp. Warp stood for “warp speed,” and this was meant to evoke power and velocity. Unfortunately, IBM’s famous lawyers were asleep on the job and forgot to run this by Paramount, owners of the Star Trek license. It turned out that IBM would need permission to simulate even a generic “jump to warp speed” in advertising for a consumer product, and Paramount wouldn’t give it. IBM was in a quandary. The name was already public, and the company couldn’t use Warp in any sense related to spaceships. IBM had to settle for the more classic meaning of Warp: something bent or twisted. This, needless to say, isn’t exactly the impression you want to give for a new product. At the launch of OS/2 Warp in 1994, Patrick Stewart was supposed to be the master of ceremonies, but he backed out, and IBM was forced to settle for Voyager captain Kate Mulgrew.

OS/2 Warp came in two versions: one with a blue spine on the box that contained a copy of Win-OS2 and one with a red spine that required the user to use the copy of Windows that they probably already had to run Windows applications. The red-spined box was considerably cheaper and became the best-selling version of OS/2 yet.

However, Chicago, now called Windows 95, was rapidly approaching, and it was going to be nothing but bad news for IBM. It would be easy to assume, but not entirely correct, that Windows won over OS/2 because of IBM’s poor marketing efforts. It would be somewhat more correct to assume that Windows won out because of Microsoft’s aggressive courting of the clone computer companies. But the brutal, painful truth, at least for an OS/2 zealot like me, was that Windows 95 was simply a better product.

For several months, I dual-booted both OS/2 Warp and a late beta of Windows 95 on the same computer: a 486 with 16MB of RAM. After extensive testing, I was forced to conclude that Windows 95, even in beta form, was faster and smoother. It also had better native applications and (this was the real kicker) crashed less often.

How could this be? OS/2 Warp was now a fully 32-bit operating system with memory protection and preemptive multitasking, whereas Windows 95 was still a horrible mutant hybrid of 16-bit Windows with 32-bit code. By all rights, OS/2 shouldn’t have crashed—ever. And yet it did. All the time.

Unfortunately, OS/2 had a crucial flaw in its design: the Synchronous Input Queue (SIQ). What this meant was that all messages to the GUI window server went through a single tollbooth. If any native OS/2 GUI app ever stopped servicing its window messages, the entire GUI would get stuck and the system would freeze. OK, technically the operating system was still running. Background tasks continued to execute just fine. You just couldn’t see them or interact with them or do anything, because the entire GUI was hung. Some enterprising OS/2 fan wrote an application that polled the joystick port and was supposed to unstick things when the user pressed a button. It rarely worked.
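
The failure mode is easier to see with a toy model. The sketch below is plain C and has nothing to do with the real Presentation Manager API; it simply puts every window’s messages into one shared queue and delivers them synchronously, one at a time. The moment a single handler stops returning, nothing behind it in the queue is ever delivered, which is roughly what the SIQ did to the whole desktop:

    /* Toy model of a synchronous input queue: one shared queue, one
       dispatcher, synchronous delivery. A single handler that never
       returns starves every other window of input. Not real PM code. */
    #include <stdio.h>

    typedef void (*window_proc)(int msg);

    static void well_behaved(int msg) {
        printf("app A handled message %d\n", msg);
    }

    static void hung_app(int msg) {
        printf("app B got message %d and stopped servicing its queue...\n", msg);
        for (;;) { /* never returns, so the dispatcher never runs again */ }
    }

    int main(void) {
        /* The single shared queue: (target window procedure, message id). */
        struct { window_proc target; int msg; } queue[] = {
            { well_behaved, 1 },
            { hung_app,     2 },    /* this entry freezes the whole "GUI" */
            { well_behaved, 3 },    /* never delivered */
            { well_behaved, 4 },    /* never delivered */
        };

        for (int i = 0; i < 4; i++)
            queue[i].target(queue[i].msg);   /* synchronous dispatch */
        return 0;
    }

Win32, by contrast, gave each thread its own input queue, which is why a single hung program on Windows 95 or NT didn’t take the rest of the desktop down with it.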

Ironically, if you never ran native OS/2 applications and just ran DOS and Windows apps in a VM, the operating system was much more stable.

OS/2’s fortune wasn’t helped by reports that users of IBM’s own Aptiva series had trouble installing it on their computers. IBM’s PC division also needed licenses from Microsoft to bundle Windows 95 with its systems, and Microsoft got quite petulant with its former partner, even demanding at one point that IBM stop all development on OS/2. IBM’s PC division ended up signing a license the same day that Windows 95 was released.

Microsoft really didn’t need to stoop to these levels. Windows 95 was a smash success, breaking all previous records for sales of operating systems. It changed the entire landscape of computing. Commodore and Atari were now out of the picture, and Apple was sent reeling by Windows 95’s success. IBM was in for the fight of its life, and its main weapon wasn’t up to snuff.

IBM wouldn’t give up the fight just yet, however. Big Blue had plans for taking back its rightful place at the head of the computing industry, and it was going to ally with anyone who wasn’t Microsoft to get there.

First up on its list of companies to crush: Intel. IBM, along with Sun, had been an early pioneer of a new type of microprocessor design called Reduced Instruction Set Computing (RISC). Basically, the idea was to cut out long and complicated instructions in favor of simpler tasks that could be done more quickly. IBM created a CPU called POWER (Performance Optimization With Enhanced RISC) and used it in its line of very expensive workstations.

IBM had already started a collaboration with Apple and Motorola to bring its groundbreaking POWER RISC processor technology to the desktop, and it used this influence to join Apple’s new operating system development project, which was then codenamed “Pink." The new OS venture was renamed Taligent, and the prospective kernel changed from an Apple-designed microkernel called Opus to a microkernel that IBM was developing for an even grander operating system that it named Workplace OS.

Workplace OS was to be the Ultimate Operating System, the OS to end all OSes. It would run on the Mach 3.0 microkernel developed at Carnegie Mellon University, and on top of that, the OS would run various “personalities,” including DOS, Windows, Macintosh, OS/400, AIX, and of course OS/2. It would run on every processor architecture under the sun, but it would mostly showcase the power of POWER. It would be all-singing and all-dancing.

And IBM never quite got around to finishing it.

Meanwhile, Dave Cutler’s team at Microsoft had already shipped the first version of Windows NT (version 3.1) in July of 1993. It had higher resource requirements than OS/2, but it also did a lot more: it supported multiple CPUs, and it was multiplatform, ridiculously stable and fault-tolerant, fully 32-bit with an advanced 64-bit file system, and compatible with Windows applications. (It even had networking built in.) Windows NT 3.5 was released a year later, and a major new release with the Windows 95 user interface was planned for 1996. While Windows NT struggled to find a market in the early days of its life, it did everything it was advertised to do, and it eventually merged with the consumer Windows 9x series in 2001 with the release of Windows XP.

In the meantime, the PowerPC chip, which was based on IBM’s POWER designs (but was much cheaper), was released in partnership with Motorola and ended up saving Apple’s Macintosh division. However, plans to release consumer PowerPC machines to run other operating systems were perpetually delayed. One of the main problems was a lack of alternate operating systems. Taligent ran into development hell, was repositioned as a development environment, and was then canned completely. IBM hastily wrote an experimental port of OS/2 Warp for PowerPC, but abandoned it before it was finished. Workplace OS never got out of early alpha stages. Ironically, Windows NT was the only non-Macintosh consumer operating system to ship with PowerPC support. But the advantages of running a PowerPC system with Windows NT over an Intel system running Windows NT were few. The PowerPC chip was slightly faster, but it required native applications to be recompiled for its instruction set. Windows application vendors saw no reason to recompile their apps for a new platform, and most of them didn’t.

So to sum up: the new PowerPC was meant to take out Intel, but it didn’t do anything beyond saving the Macintosh. The new Workplace OS was meant to take out Windows NT, but IBM couldn’t finish it. And OS/2 was meant to take out Windows 95, but the exact opposite happened.

In 1996, IBM released OS/2 Warp 4, which included a revamped Workplace Shell, bundled Java and development tools, and a long-awaited fix for the Synchronous Input Queue. It wasn’t nearly enough. Sales of OS/2 dwindled while sales of Windows 95 continued to rise. IBM commissioned an internal study to reevaluate the commercial potential of OS/2 versus Windows, and the results weren’t pretty. The order came down from the top of the company: the OS/2 development lab in Boca Raton would be eliminated, Workplace OS would be killed, and over 1,300 people would lose their jobs. The Bear, beaten and bloodied, had left the field.

IBM would no longer develop new versions of OS/2, although it continued selling it until 2001. Who was buying it? Mostly banks, who were still wedded to IBM’s mainframes. The banks mostly used it in their automated teller machines, but Windows NT eventually took over this tiny market as well. After 2001, IBM stopped selling OS/2 directly and instead utilized Serenity Systems, one of its authorized business dealers, who rechristened the operating system as eComStation. You can still purchase eComStation today (some people do), but copies are very, very rare. Serenity Systems continues to release updates that add driver support for modern hardware, but the company is not actively developing the operating system itself. There simply isn’t enough demand to make such an enterprise profitable.

In December 2004, IBM announced that it was selling its entire PC division to the Chinese company Lenovo, marking the definitive end of a 23-year reign of selling personal computers. For nearly 10 of those 23 years, IBM tried in vain to replace the PC’s Microsoft-owned operating system with one of its own. Ultimately, it failed.

Many OS/2 fans petitioned IBM for years to release the operating system’s code base under an open source license, but IBM has steadfastly refused. The company is probably unable to, as OS/2 still contains large chunks of proprietary code belonging to other companies, most significantly Microsoft.

Most people who want to use OS/2 today come to it out of historical interest, and their task is made more difficult by the fact that OS/2 has trouble running under virtual machines such as VMware. A Russian company was hired by a major bank in Moscow in the late 1990s to find a solution for its legacy OS/2 applications. It ended up writing its own virtual machine solution, which became Parallels, a popular application that today allows Macintosh users to run Windows apps on OS X. In an eerie way, running Parallels today reminds me a lot of running Win-OS2 on OS/2 in the mid-1990s. Apple, perhaps wisely, has never bundled Parallels with its Mac computers.

So why did IBM fail so badly with OS/2? Why was Microsoft able to deftly cut IBM out of the picture and then beat it to death with Windows? And more importantly, are there any lessons from this story that might apply to hardware and software companies today?

IBM ignored the personal computer industry long enough that it was forced to rush out a PC design that was easy (and legal) to clone. Having done so, the company immediately wanted to put the genie back in the bottle and take the industry back from the copycats. When IBM announced the PS/2 and OS/2, many industry pundits seriously thought the company could do it.

Unfortunately, IBM was being pulled in two directions. The company's legacy mainframe division didn’t want any PCs that were too powerful, lest they take away the market for big iron. The PC division just wanted to sell lots of personal computers and didn’t care what it had to do in order to meet that goal. This fighting went back and forth, resulting in agonizing situations such as IBM’s own low-end Aptivas being unable to run OS/2 properly and the PC division promoting Windows instead.

IBM always thought that PCs would be best utilized as terminals that served the big mainframes it knew and loved. OS/2’s networking tools, available only in the Extended Edition, were mostly based on the assumption that PCs would connect to big iron servers that did the heavy lifting. This was a “top-down” approach to connecting computers together. In contrast, Microsoft approached networking from the “bottom-up,” where the server was just another PC running Windows. As personal computing power grew and more robust versions of Windows like NT became available, this bottom-up approach became more and more viable. It was also certainly much less expensive.

IBM also made a crucial error in promoting OS/2 as a “better DOS than DOS and a better Windows than Windows.” Having such amazing compatibility with other popular operating systems out of the box meant that the market for native OS/2 apps never had a chance to develop. Many people bought OS/2. Very few people bought OS/2 applications.

The book The Innovator’s Dilemma makes a very good case that big companies with dominant positions in legacy markets are institutionally incapable of shifting over to a new disruptive technology, even though those companies frequently invent said technologies themselves. IBM invented more computer technologies and holds more patents than any other computer company in history. Still, when push came to shove, it gave up the personal computer in favor of hanging on to the mainframe. IBM still sells mainframes today and makes good money doing so, but the company is no longer a force in personal computers.

Today, many people have observed that Microsoft is the new dominant force in legacy computing, with legacy redefined as a personal computer running Windows. The new disruptive force is smartphones and tablets, an area in which Apple and Google have become the new dominant players. Microsoft, to its credit, responded to this new disruption as quickly as it was able. The company even redesigned its legacy user interface (the Windows desktop) to be more suited to tablets.

It could be argued that Microsoft was slow to act, just as IBM was. It could also be argued that Windows Phone and Surface tablets have failed to capture market share against iOS and Android in the same way that OS/2 failed to beat back Windows. However, there is one difference that separates Microsoft from most legacy companies: the company doesn’t give up. IBM threw in the towel on OS/2 and then on PCs in general. Microsoft is willing to spend as many billions as it takes in order to claw its way back to a position of power in the new mobile landscape. Microsoft still might not succeed, but for now at least, it's going to keep trying.

The second lesson of OS/2—to not be too compatible out of the box with rival operating systems—is a lesson that today’s phone and tablet makers should take seriously. Blackberry once touted that you could easily run Android apps on its BB10 operating system, but that ended up not helping the company at all. Alternative phone operating system vendors should think very carefully before building in Android app compatibility, lest they suffer the same fate as OS/2.

The story of OS/2 is now fading into the past. In today’s fast-paced computing environment, it may not seem particularly relevant. But it remains a story of how a giant, global mega-corporation tried to take on a young and feisty upstart and ended up retreating in utter defeat. Such stories are rare, and because of that rarity they become more precious. It’s important to remember that IBM was not the underdog. It had the resources, the technology, and the talent to crush the much smaller Microsoft. What it didn’t have was the will.
