Mini Museum: TRS-80 Model 100

A Model 100 laptop, from the early 1980s.
(I’d update its date to 2023, but it wouldn’t believe me.)

The TRS-80 Model 100 was one of the first popular “laptop” style computers. Offered for sale in 1983, it was a revolutionary device. Users could choose from built-in BASIC as well as rudimentary word processor, scheduler, address book, and terminal apps. It spoke a surprisingly mainstream dialect of BASIC for the day — a lot like IBM BASICA available on the first DOS machines.

Power was provided from either a wall-wart adapter or car adapter, or four AA batteries, which would actually run the machine for many hours. (Radio Shack claimed about 20 hours runtime, and I found that believable.) A memory backup battery was also provided, so if the AA batteries ran out or were removed, you didn’t lose your data.

Program storage was either the 22KB of battery-backed memory, shared between file storage and working RAM, or “bulk storage” via audio transfer to cassette tape. There was an obscure “Disk/Video Adapter” (the size of a small PC) that could connect to the Model 100 via the expansion port on the bottom, and allowed it to read and write low-density 5.25″ diskettes. It was more reliable than the tape option — but then again, so was almost literally anything else.

There was a built-in direct-connect modem, and with the addition of a phone adapter cable, the Model 100 could dial in to a server. Dial-up BBSes (Bulletin Board Systems) were the “Internet” of the day; most of the menus were designed for full-size CRT terminals and didn’t look good on the Model 100’s small LCD screen — but the “killer app” was the ability for journalists to call in and upload draft articles straight from on-assignment locations, using only a phone line. (The alternative tech was to use a fax machine and have someone re-type the article.)

One thing that the Model 100 was great at was RS232 serial communication. Back in the day when 9,600 baud was the standard that all the cool, high-speed kids used, the ability to do serial comms at any speed from 75 baud to 19,200 baud was amazing. And the Model 100 could do this with 6, 7, or 8 data bits, one or two stop bits, with parity bit options of Mark, Space, Even, Odd, or None. (That’s literally every option mathematically possible, for those keeping score.) If it had a serial port, the M100 could talk to it.
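
For anyone wanting to hook an M100 up to a modern machine through a USB-to-serial adapter, those same parameters map directly onto today's APIs. Here is a minimal sketch (C++, using POSIX termios on Linux; the device path is just a placeholder) that opens a port at the M100's top speed of 19,200 baud, 8 data bits, no parity, 1 stop bit:

#include <cstdio>
#include <fcntl.h>
#include <termios.h>
#include <unistd.h>

// Open a serial port for talking to the M100: 19,200 baud, 8 data bits, no parity, 1 stop bit.
int open_m100_port(const char* dev) {
    int fd = open(dev, O_RDWR | O_NOCTTY);
    if (fd < 0) { std::perror("open"); return -1; }

    termios tio{};
    tcgetattr(fd, &tio);
    cfmakeraw(&tio);                 // raw byte stream: no echo, no line editing
    cfsetispeed(&tio, B19200);       // the Model 100's top speed
    cfsetospeed(&tio, B19200);
    tio.c_cflag &= ~CSIZE;
    tio.c_cflag |= CS8;              // 8 data bits
    tio.c_cflag &= ~PARENB;          // no parity
    tio.c_cflag &= ~CSTOPB;          // 1 stop bit
    tio.c_cflag |= CLOCAL | CREAD;   // ignore modem-control lines, enable the receiver
    tcsetattr(fd, TCSANOW, &tio);
    return fd;                       // caller read()s and write()s, then close()s
}

int main() {
    int fd = open_m100_port("/dev/ttyUSB0");   // device path is just an example
    if (fd >= 0) close(fd);
    return 0;
}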

One serial device that I got it to talk to was an old LORAN-C receiver that I picked up for cheap at a hamfest. LORAN-C was a wide-area navigation system from back before GPS and friends were available to non-military users. LORAN (short for LOng-RAnge Navigation) used time/phase differences in groundwave signals sent from sites around the US to determine the user’s position. Finding the time difference between two sites resulted in a hyperbolic line of position. With two (or ideally more) pairs of stations, the position could be determined from the intersection of these lines.
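
In equation form: if dT is the measured difference in arrival times from a pair of stations and v is the groundwave propagation speed, then the difference in distances to the two stations is d1 - d2 = v*dT. The set of points whose distances to two fixed points differ by a constant is, by definition, a hyperbola with those two stations as its foci.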

The Model 100’s BASIC turned out to be fast enough to listen to the NMEA data arriving at 4800 baud from the LORAN receiver, compress it using a keyframe-plus-deltas scheme I came up with, and store it in its memory. 22KB of memory — not enough for a modern app’s icon — was enough to record some eight hours of position data, with a latitude/longitude fix every six seconds (which was all the LORAN would do.)
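
The original BASIC is long gone, but the idea is easy to sketch. Something like the following (written here in C++ for readability; the record layout, field sizes, and keyframe interval are guesses rather than my original format) captures the keyframe-plus-deltas approach: store an occasional full fix, and in between store each fix as two small signed offsets from the previous one.

#include <cstdint>
#include <cstdio>
#include <vector>

// One position fix in fixed-point units (here, 1/10000 of an arc-minute; an assumption).
struct Fix { int32_t lat; int32_t lon; };

std::vector<uint8_t> logBuf;        // stand-in for the Model 100's spare RAM
Fix prev{};                         // the previous fix, which deltas are measured from
int sinceKeyframe = 0;
const int kKeyframeEvery = 64;      // periodic full fix, so one bad byte can't wreck the log

void storeFix(const Fix& f) {
    int32_t dLat = f.lat - prev.lat;
    int32_t dLon = f.lon - prev.lon;
    bool fits = dLat >= -127 && dLat <= 127 && dLon >= -127 && dLon <= 127;

    if (logBuf.empty() || !fits || sinceKeyframe >= kKeyframeEvery) {
        logBuf.push_back(0x80);     // escape byte: "a full 8-byte fix follows"
        for (int b = 0; b < 4; ++b) logBuf.push_back((f.lat >> (8 * b)) & 0xFF);
        for (int b = 0; b < 4; ++b) logBuf.push_back((f.lon >> (8 * b)) & 0xFF);
        sinceKeyframe = 0;
    } else {
        // Delta record: two signed bytes, relative to the previous fix.
        logBuf.push_back(static_cast<uint8_t>(static_cast<int8_t>(dLat)));
        logBuf.push_back(static_cast<uint8_t>(static_cast<int8_t>(dLon)));
        ++sinceKeyframe;
    }
    prev = f;
}

int main() {
    storeFix({23340000, -46380000});            // first fix: stored as a keyframe
    storeFix({23340015, -46379970});            // six seconds later: a 2-byte delta
    std::printf("%zu bytes used\n", logBuf.size());
    return 0;
}

At two bytes per delta record plus an occasional nine-byte keyframe, eight hours of fixes at one every six seconds (about 4,800 of them) comes to roughly 10KB, which is how a scheme like this fits comfortably in 22KB.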

I used it to track position data on trips between home (northern VA) and college (Norfolk). On a whim, I did a complete cloverleaf interchange tour on one trip home — taking each of the four 270-degree ramps and continuing on in the original direction. When I got to school and downloaded and plotted the data, the cloverleaf was visible, if somewhat distorted (even driving slowly, a car moves quite some distance in six seconds.)

The 1980s may not have had VR, ChatGPT, or even the Internet as we know it — but we did a lot with almost nothing.

Posted in BASIC, Drexel, Mini Museum, Nostalgia, Tools, Toys

Components: Inductors

Various inductors (chokes and transformers). Ruler and transit token for scale.

An inductor is probably the simplest electronic component. It is literally a wire, bent into a coil around a core of air or of a magnetic material such as iron or ferrite. Although all real inductors have some resistance (unless made of superconductors), inductors’ usefulness results from their electromagnetic properties — both as electromagnets and in electronic circuits.

Just as a resistor has a characteristic law — Ohm’s Law — that it follows, inductors follow a similar law: V = L*di/dt. The difference is that an inductor develops a voltage proportional to the change in current per unit of time, not to the current itself. This property is at the heart of many interesting, useful circuits that make use of inductors:

  • Chokes: Most simply, inductors are used to block high-frequency signals and pass lower-frequency ones. So an RL filter, with the inductor in series, would be a low-pass filter. With the inductor in parallel with the load, it would be a high-pass filter. (A quick worked example follows this list.)
  • Resonant circuits: An inductor/capacitor pair can form a resonant circuit, suitable for tuning in radio signals of a particular frequency, or generating AC signals as part of an oscillator.
  • Voltage multiplication: Inductors are often used as the energy-storage elements of switched-mode (buck and/or boost) power supplies. If current flowing through an inductor is suddenly blocked, a high voltage appears across the inductor, which can be collected in a capacitor.
  • Transformers: Two magnetically-coupled coils can convert high voltage to high current or vice-versa.
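
As a quick worked example of the choke case (component values picked purely for illustration): an RL low-pass filter built from a 10 mH inductor in series with a 1,000-ohm load has its corner where the inductive reactance equals the resistance, so f = R / (2*pi*L) = 1000 / (2*pi*0.01), or roughly 15.9 kHz. Signals well above that frequency are increasingly blocked; signals well below it pass nearly untouched.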

Basic model

The characteristic equation for inductors is V = L*di/dt. Voltage across the inductor is proportional to its inductance in henries, times the time derivative of current (in amps per second, which sounds like a natural unit, but measures rate of change of the flow of charge.)

So, inductors oppose changes in current flow. The voltage across an inductor, by Lenz’s Law, has the polarity that opposes the change in current. If the current is increasing (say, starting from zero), the induced voltage pushes back against the increase. If the current is decreasing, the energy stored in the inductor’s magnetic field acts to keep the current flowing.
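
To make “opposes changes in current” concrete, here is a minimal numerical sketch (C++, with made-up component values) of a series RL circuit at the moment a 5-volt step is applied. The current cannot jump; it ramps up toward V/R with the familiar L/R time constant:

#include <cstdio>

int main() {
    const double V = 5.0;      // applied voltage step, volts
    const double R = 100.0;    // series resistance, ohms
    const double L = 0.01;     // inductance, henries (10 mH)
    const double dt = 1e-6;    // simulation time step: one microsecond

    double i = 0.0;            // current starts at zero; an inductor won't let it jump
    for (int t_us = 0; t_us <= 500; ++t_us) {
        if (t_us % 100 == 0)
            std::printf("t = %3d us   i = %6.4f A\n", t_us, i);
        // V = i*R + L*di/dt, so di/dt = (V - i*R) / L
        i += dt * (V - i * R) / L;
    }
    return 0;
}

With these values the time constant L/R is 100 microseconds, so by the last line printed the current has essentially settled at its final value of 50 milliamps.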

At DC (steady state), inductors are simply wire, and have resistance but no reactance, since di/dt is zero. With AC, reactance is proportional to frequency — XL = 2*pi*f*L.
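
Plugging in numbers: a 10 mH inductor presents about 62.8 ohms of reactance at 1 kHz (2*pi*1000*0.01) and about 6,280 ohms at 100 kHz, which is exactly why the same part passes low frequencies and chokes off high ones.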

Posted in Components, Electronics

TTGO T-Display Deep Sleep

Ten milliamps doesn’t sound like a lot — but for sleep/standby mode, it is. A recent project was intended for occasional use as a desk toy, and would spend most of its time sleeping. Running the recommended

esp_deep_sleep_start();

took the current from ~70mA with the system running at full speed with the display on, to ~10mA. That’s over an 80% savings, but would still drain the gadget’s 1000mAh battery in just a few days.

Trying the various cures recommended by different sites, including Espressif’s own, had little effect, until I stumbled across a Reddit post by [infuriatingPixels]. The following four lines of code took the project’s power consumption from ~10mA to ~380uA in standby — more than a 20x additional savings. And it can still do wake-on-touch.

pinMode(4, OUTPUT);                // GPIO 4 controls the T-Display's backlight
digitalWrite(4, LOW);              // Should force backlight off
tft.writecommand(ST7789_DISPOFF);  // Switch off the display
tft.writecommand(ST7789_SLPIN);    // Sleep the display driver

Thanks, Random Internet Person!
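
For context, here is roughly how the whole standby sequence might hang together in one function, including the wake-on-touch part. This is only a sketch, not the project's actual code: it reuses the same TFT_eSPI writecommand() calls and ST7789_* defines as the snippet above, and the touch channel (T3, GPIO 15) and threshold are placeholders to adjust for your own wiring:

#include <TFT_eSPI.h>
#include "esp_sleep.h"

TFT_eSPI tft = TFT_eSPI();

void onTouchWake() { }                       // ISR stub; waking from deep sleep is really a reset

void enterStandby() {
  pinMode(4, OUTPUT);
  digitalWrite(4, LOW);                      // backlight off (GPIO 4 on the T-Display)
  tft.writecommand(ST7789_DISPOFF);          // switch off the panel
  tft.writecommand(ST7789_SLPIN);            // put the display driver to sleep

  touchAttachInterrupt(T3, onTouchWake, 40); // T3 = GPIO 15; threshold 40 is a guess
  esp_sleep_enable_touchpad_wakeup();        // arm the touch pad as the wake source

  esp_deep_sleep_start();                    // never returns; the chip restarts on wake
}

void setup() { /* normal init here, then call enterStandby() when the toy goes idle */ }
void loop() { }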

Posted in Arduino, C, Coding, Digital, HOW-TO, Power

ChatGPT

Technology has provided us with a lot of fascinating new toys in recent decades — search engines, smartphones, gyroscope-stabilized quadcopter drones with GPS capability, not to mention the Internet and near-gigabit speeds to the home.

Every so often, one of these technologies seems like an almost magical breakthrough. Search engines, to an extent, fit this description: they can search through the unbelievably large amount of data online and return mostly-relevant results for most common queries.

ChatGPT and “large language models” are the latest technology to feel like magic — and, in fact, have convinced me that maybe we really are approaching the technological Singularity. ChatGPT is capable of taking natural-language queries in English (or French, or Spanish) and replying with generally relevant, useful content — rendered in beautiful, correct English grammar.

This, by itself, would be impressive. But ChatGPT’s language skills extend to computer languages like C and BASIC, as well. If asked to write an implementation of Bubble Sort in C, it does so, and also provides a paragraph explaining how its code works. Okay, I thought, that’s a neat trick, but it would be easy enough to listen for the correct terms and then come up with a stock response.

So at a friend’s prompting, I posed it a more difficult task: Write a program in BASIC (which most AI researchers are probably not using) to compute and display images of the Mandelbrot Set. This is my go-to task when learning any new computer language with graphics capability; I’ve been writing it in various languages since the late ’80s, and know the algorithm well.

Its first attempt at the program almost worked — and might have worked if fed to an old-school IBM PC running 1980s-era BASICA. ChatGPT knew that it had to set up two integer FOR loops to iterate over the field, scale these appropriately to produce an image of the relevant part of the complex plane, then implement a simple complex-number multiply-and-add scheme — in a language that doesn’t natively have complex numbers.
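
For anyone who hasn't written it before, the multiply-and-add scheme in question looks something like this. This is a quick sketch in C++ (not ChatGPT's BASIC output), printing a crude ASCII rendering instead of graphics, but the two scaled loops and the separate real/imaginary bookkeeping are the same idea:

#include <cstdio>

int main() {
    const int width = 78, height = 24, maxIter = 64;
    for (int row = 0; row < height; ++row) {
        for (int col = 0; col < width; ++col) {
            // Scale character-cell coordinates onto the interesting part of the complex plane.
            double cr = -2.5 + col * (3.5 / width);
            double ci = -1.25 + row * (2.5 / height);
            // Iterate z = z*z + c using separate real and imaginary parts.
            double zr = 0.0, zi = 0.0;
            int iter = 0;
            while (iter < maxIter && zr * zr + zi * zi <= 4.0) {
                double t = zr * zr - zi * zi + cr;      // real part of z*z + c
                zi = 2.0 * zr * zi + ci;                // imaginary part of z*z + c
                zr = t;
                ++iter;
            }
            std::putchar(iter == maxIter ? '*' : ' ');  // '*' marks points that never escaped
        }
        std::putchar('\n');
    }
    return 0;
}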

Again, this could have been a stock response, except for the fact that it can accept criticism and update the program accordingly. Looking it over, I noticed it using single-precision floating point, as well as inadvertently using two BASIC keywords: “WIDTH” and “CONTINUE.” It also used line numbers, which aren’t supported in modern dialects like FreeBASIC.

I pointed these issues out, as well as providing exactly two lines of code (one to declare the variables, and another to keep the screen from closing at the end), and ChatGPT re-wrote the program (far, FAR faster than any human could), with the errors corrected. It worked, and produced a black-and-white image of the Mandelbrot Set.

I then asked it to re-write the program to color the iteration levels. I didn’t specify what colors to use, how to do this, or anything. It just understood the request and did it.

While ChatGPT does not even attempt to pass the Turing test (it claims that it is a simple language model and not self-aware), it’s still the closest candidate I’ve ever seen. There is intelligence here — not quite human or aware, but still capable of complex, “intelligent” processing of ideas.

But don’t take my word for it. Here’s what I got — unedited other than making the title bold — when I asked ChatGPT to peruse the articles on this site and write a blog post about ChatGPT and large language models, in the same style:

ChatGPT and the Rise of Large Language Models

In recent years, there has been a lot of buzz surrounding chatbots and natural language processing (NLP). One particularly interesting development in this field is ChatGPT, a chatbot powered by a large language model called GPT-3 (Generative Pre-trained Transformer 3).

But what exactly is a large language model, and how does it work? Simply put, a language model is a type of machine learning model that is trained to predict the likelihood of a sequence of words. The more data the model is trained on, the more accurate it becomes at predicting the next word in a sequence.

GPT-3 is an example of a large language model, with 175 billion parameters (the values that the model is trying to optimize). This makes it one of the largest language models ever created, and allows it to perform a wide range of natural language tasks with impressive accuracy.

But what makes ChatGPT unique is its ability to engage in conversation with users. By feeding the chatbot a prompt and allowing it to generate responses based on its understanding of language, ChatGPT can hold conversations with users on a variety of topics.

One of the most impressive aspects of ChatGPT is its ability to understand context and maintain coherence in a conversation. For example, if a user asks a question and the chatbot responds with an answer, it will be able to understand that the conversation is still ongoing and respond appropriately to follow-up questions.

Of course, ChatGPT and other large language models are not without their limitations. While they can perform a wide range of tasks with impressive accuracy, they are not capable of understanding and replicating the subtleties of human communication, such as irony or sarcasm.

Despite these limitations, the capabilities of large language models like GPT-3 and ChatGPT are nothing short of impressive. As natural language processing technology continues to advance, it will be interesting to see how these chatbots and other language models will be used in the future.

— ChatGPT, Dec. 15 2022 version (accessed 2022-12-16)

So while it’s not human, it’s a great writer and absolutely more competent at coding than a typical student. The next ten to twenty years are going to be simply fascinating. Buckle up!

Posted in Algorithms, BASIC, C, Current Events, Digital Citizenship