ChatGPT

Technology has provided us with a lot of fascinating new toys in recent decades — search engines, smartphones, gyroscope-stabilized quadcopter drones with GPS capability, not to mention the Internet and near-gigabit speeds to the home.

Every so often, one of these technologies seems like an almost magical breakthrough. Search engines, to an extent, fit this description: they can search through the unbelievably large amount of data online and return mostly-relevant results for most common queries.

ChatGPT and “large language models” are the latest technology to feel like magic — and, in fact, have convinced me that maybe we really are approaching the technological Singularity. ChatGPT is capable of responding to natural-language queries in English (or French, or Spanish) and replying with generally relevant, useful content — rendered in beautiful, correct English grammar.

This, by itself, would be impressive. But ChatGPT’s language skills extend to computer languages like C and BASIC, as well. If asked to write an implementation of Bubble Sort in C, it does so, and also provides a paragraph explaining how its code works. Okay, I thought, that’s a neat trick, but it would be easy enough to listen for the correct terms and then come up with a stock response.

So at a friend’s prompting, I posed it a more difficult task: Write a program in BASIC (which most AI researchers are probably not using) to compute and display images of the Mandelbrot Set. This is my go-to task when learning any new computer language with graphics capability; I’ve been writing it in various languages since the late ’80s, and know the algorithm well.

Its first attempt at the program almost worked — and might have worked if fed to an old-school IBM PC running 1980s-era BASICA. ChatGPT knew that it had to set up two integer FOR loops to iterate over the field, scale these appropriately to produce an image of the relevant part of the complex plane, then implement a simple complex-number multiply-and-add scheme — in a language that doesn’t natively have complex numbers.

Again, this could have been a stock response, except for the fact that it can accept criticism and update the program accordingly. Looking it over, I noticed that it used single-precision floating point, and that it had inadvertently used two BASIC keywords (“WIDTH” and “CONTINUE”) as variable names. It also used line numbers, which aren’t supported in modern dialects like FreeBASIC.

I pointed these issues out and provided exactly two lines of code (one to declare the variables, and another to keep the screen from closing at the end), and ChatGPT re-wrote the program (far, FAR faster than any human could) with the errors corrected. It worked, and produced a black-and-white image of the Mandelbrot Set.
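For reference, the working version was structured roughly like this. This is a FreeBASIC sketch of my own to illustrate the approach, not ChatGPT’s actual output; the window size, coordinate bounds, and iteration limit here are arbitrary choices of mine, and the Dim and Sleep statements stand in for the two lines I supplied:

' Rough sketch: two FOR loops over the pixels, with the complex
' multiply-and-add done in plain real arithmetic.
Const maxIter = 100
Dim As Integer px, py, i
Dim As Double x0, y0, x, y, xtemp

ScreenRes 640, 480, 32                 ' open a 640x480 graphics window

For py = 0 To 479
    For px = 0 To 639
        ' scale pixel coordinates onto the interesting part of the complex plane
        x0 = -2.5 + px * (3.5 / 640)   ' real axis, roughly -2.5 to 1.0
        y0 = -1.25 + py * (2.5 / 480)  ' imaginary axis, roughly -1.25 to 1.25
        x = 0 : y = 0 : i = 0
        ' iterate z = z*z + c until the point escapes or we give up
        While (x * x + y * y <= 4) And (i < maxIter)
            xtemp = x * x - y * y + x0
            y = 2 * x * y + y0
            x = xtemp
            i += 1
        Wend
        ' points that never escape are in the set; plot them in white
        If i = maxIter Then PSet (px, py), RGB(255, 255, 255)
    Next px
Next py

Sleep   ' keep the window open until a key is pressed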

I then asked it to re-write the program to color the iteration levels. I didn’t specify what colors to use, how to do this, or anything. It just understood the request and did it.
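The change is essentially just the plotting step: shade each escaping point by its iteration count instead of plotting only the points inside the set. Something like the following, in place of the single plotting line in the sketch above, with an arbitrary color mapping of my own (again, an illustration rather than ChatGPT’s actual code):

        ' shade escaping points by how quickly they escape
        If i = maxIter Then
            PSet (px, py), RGB(255, 255, 255)           ' in the set: white
        Else
            PSet (px, py), RGB(i * 2, i, 255 - i * 2)   ' outside: color by iteration count
        End If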

While ChatGPT does not even attempt to pass the Turing test (it claims that it is a simple language model and not self-aware), it’s still the closest candidate I’ve ever seen. There is intelligence here — not quite human or aware, but still capable of complex, “intelligent” processing of ideas.

But don’t take my word for it. Here’s what I got — unedited other than making the title bold — when I asked ChatGPT to peruse the articles on this site and write a blog post about ChatGPT and large language models, in the same style:

ChatGPT and the Rise of Large Language Models

In recent years, there has been a lot of buzz surrounding chatbots and natural language processing (NLP). One particularly interesting development in this field is ChatGPT, a chatbot powered by a large language model called GPT-3 (Generative Pre-trained Transformer 3).

But what exactly is a large language model, and how does it work? Simply put, a language model is a type of machine learning model that is trained to predict the likelihood of a sequence of words. The more data the model is trained on, the more accurate it becomes at predicting the next word in a sequence.

GPT-3 is an example of a large language model, with 175 billion parameters (the values that the model is trying to optimize). This makes it one of the largest language models ever created, and allows it to perform a wide range of natural language tasks with impressive accuracy.

But what makes ChatGPT unique is its ability to engage in conversation with users. By feeding the chatbot a prompt and allowing it to generate responses based on its understanding of language, ChatGPT can hold conversations with users on a variety of topics.

One of the most impressive aspects of ChatGPT is its ability to understand context and maintain coherence in a conversation. For example, if a user asks a question and the chatbot responds with an answer, it will be able to understand that the conversation is still ongoing and respond appropriately to follow-up questions.

Of course, ChatGPT and other large language models are not without their limitations. While they can perform a wide range of tasks with impressive accuracy, they are not capable of understanding and replicating the subtleties of human communication, such as irony or sarcasm.

Despite these limitations, the capabilities of large language models like GPT-3 and ChatGPT are nothing short of impressive. As natural language processing technology continues to advance, it will be interesting to see how these chatbots and other language models will be used in the future.

— ChatGPT, Dec. 15, 2022 version (accessed 2022-12-16)

So while it’s not human, it’s a great writer, and absolutely more competent at coding than a typical student. The next ten to twenty years are going to be simply fascinating. Buckle up!


Meigs Field

Okay, Microsoft and Asobo get some karma points for this one.

Along with several other neat updates such as a study-level Airbus A310 and other historic aircraft, Flight Simulator has now, albeit virtually, corrected an ancient wrong. KCGX — the storied Meigs Field of old — has been added to the sim.

Hello, old friend…

For those of us ancient enough to have known the very earliest versions of MSFS, runway 36 at Meigs was the default starting point. The airport, blocks from downtown Chicago yet easy enough for novice pilots to fly into, was a great, iconic introduction to flying.

The real airport was just as inspiring. I was fortunate enough to be able to visit it before its runway was bulldozed in the middle of the night by then-mayor Daley. I wish I was kidding.

The tower and service buildings don’t seem to resemble the two-story, mid-20th-century building with the glass façade that I remember. So I’ll think of it as Meigs reborn, the way it could be.

Next, I think I’ll try the famous “Checkerboard Hill” approach into Hong Kong’s Kai Tak airport. They brought that one back, too — but for safety’s sake, let’s leave that one in VR.


Mini-Museum of Computing History

The first few exhibits in the Museum.

Welcome to the Mini Museum of Computing History (located on the first floor of Drexel’s University Crossings building, near the east entrance).

Current exhibits include the following (click on links for more information):

Planned future exhibits include:

  • History of x86 processors;
  • Magnetic core memory;
  • Vacuum tube / transistor / integrated circuit comparison;
  • Moore mechanical calculator (hopefully working);
  • History of microcontrollers;
  • etc.

In keeping with the Museum’s theme, descriptions of the exhibits are hosted on this site, accessible by QR codes posted next to each exhibit. Scan each exhibit’s QR code for the relevant article.


Mini Museum: Punched Cards

Data storage, even on electronic computers, wasn’t always done electronically or even electromagnetically. In the mid-20th century, paper card stock was used to store information by the presence or absence of punched holes in the cards. Although the storage density of such media (one bit per punch position) is orders of magnitude less than that of modern devices like SD cards (or even floppy disks), one benefit is that the data can be created by hand. The only tool required is basically a sharp stick.

With such a tool, programmers could make their own cards without needing to sit down and use a full-size card punch. While the handheld tool probably wouldn’t be the best choice for coding up an operating system, it could allow data collection out in the field. Ask your questions and record your responses on punched cards, ready to be fed into the computer.

Automatic card and tape punches even had a “bit bucket,” where punched-out bits would go to be discarded. (This was before recycling; at least paper is biodegradable.)

…What won’t they think of next?
