
APSW 3.37 is the last with Python 2 / early Python 3 support

This release of APSW (my Python wrapper for SQLite's C interface) is the last that will support Python 2, and earlier versions of Python 3 (before 3.7).

If you currently use APSW with Python 2/early Python 3, then you will want to pin the APSW version to 3.37 (see the example below). You will still be able to use this version of APSW with future versions of SQLite (supported until 2050), but new C level APIs won't be covered. The last C level API additions were serialization in 3.36 (June 2021) and autovacuum control in 3.37 (December 2021).
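
A pin is a one line change wherever you declare dependencies. A minimal sketch, assuming you install via pip/requirements (the exact version string depends on how the release is published for your platform):

    # requirements.txt - stay on the last release supporting Python 2 / early Python 3
    apsw==3.37.*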

What does APSW support now ...

APSW supports every Python version from 2.3 (released 2003) onwards. It doesn't support earlier versions because there was no GIL API (needed for multi-threading support).

The downloads for the prebuilt Windows binaries give an idea of just how many Python versions that is (15). (Python 3.0 does actually work, but is missing a module used by the test suite.)

Many Python versions supported ...

Each release involves building and testing all the combinations of 15 Python versions, 32 and 64 bit environments, and both UCS2 and UCS4 Unicode sizes for Python < 3.3, on multiple operating systems.

There are ~13k lines of C code making up APSW, with ~7k lines of Python code making up the test suite. It is that test suite that gives the confidence that all is working as intended.

... and why?

I wanted to make sure that APSW is the kind of module I would want to use. The most frustrating thing as a developer is wanting to change one thing (eg one library) and then finding that it forces you to change the versions of other components, or worse the runtime and dev tools (eg compiler).

I never made the guarantee, but it turned out to be:

You can change the APSW (and SQLite) versions, and nothing else. No other changes will be required and everything will continue to work, probably better.

This would apply to any project otherwise untouched since 2004!

There are two simple reasons:

  • Because I could - I do software development for a living, and not breaking things is a good idea (usually)
  • I would have to delete code that works

What happens next?

I am going to delete code that works, but it is mainly in blocks that do one thing for Python 2, another for early Python 3, and another for current Python 3.

My plan is to incrementally remove Python 2/early 3 code from the Python test suite and the C code base together, while updating documentation (only Python 3 types need to be mentioned). The test suite and coverage testing will hopefully catch any problems early.

I will be happy that the code base, testing, documentation, and tooling will all become smaller. That makes things less complex.

Other thoughts

The hardest part of porting APSW from Python 2 to 3 was the test suite, which had to remain valid in both environments. For example it is easy to create invalid Unicode strings in Python 2, which I had to make sure the test suite checked for.

Making the Python test suite changes was about 10 times the amount of work of the C level API changes. Python 3 wasn't that much different in terms of the C API (just some renaming and the unification of int and long etc).

Category: misc – Tags: apsw, python


I scanned 3,768 photos and 2,799 slides

Does a physical photo you never look at really exist?

Our current devices and online services do a fantastic job of managing digital photos. There is face recognition, content recognition, maps, timelines etc. And it is all backed up in the cloud.

Meanwhile the physical photos languish inside a box, itself inside another box. All it takes is a few house moves over the years. There is a local company that will do scanning, but it is quite expensive and you still need to do most of the work yourself: extracting photos from albums, unsticking stacks of photos from each other, sorting landscape from portrait orientation shots, and more. The individual photos just aren't that valuable.

I also care about the physical to digital conversion parameters. For example what resolution do you scan the photos at? The higher the number the better the detail, the longer the scanning takes, the larger the file sizes become, and the detail may not actually be present in the print anyway. There are also all sorts of corrections for colour, blurring, dust, tone etc.

The reason I care is that I never want to do the scanning again! Consequently I pick high levels of fidelity, and almost no processing. The processing is very hard to undo when it makes a mistake. I also prefer capturing the photos as they are, since that is how they look now.

How hard could it be?

I briefly tried using mobile apps and a phone to scan. That turned out not to be useful: the capture quality was terrible and I did not like the apps. I resorted to a well reviewed flatbed scanner (surprisingly cheap) and a separate slide scanner.

The process itself is simple - load scanner, press buttons, wait, repeat. You do have to focus on efficiency - an additional 1 second per item would add 2 hours to the total scanning time!
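
The arithmetic behind that is simple:

    # back of the envelope: 3,768 photos + 2,799 slides
    items = 3768 + 2799
    extra_seconds_per_item = 1
    print(items * extra_seconds_per_item / 3600)   # about 1.8 hours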

I finished my own photos quickly - all 200 of them. Then I volunteered to scan all the family photos, which is where the totals came from. That total is about a quarter of the size of my digital photo collection taken over the last two decades.

A small selection of slides and photos awaiting scanning

What did I learn?

It was fun. The photos themselves are like a time machine, with the oldest from 1907. People used to get dressed up in the olden days for photos! But just like other people's vacation photos (which many were), the pictures are mundane unless you were there. The backgrounds were interesting because they showed how things were then.

What surprised me the most was the sheer number of different print sizes. There was absolutely no consistency or standardisation at all. The scanner will scan multiple photos at once provided there is enough of a gap between them. Most of my time was spent fitting as many as possible onto the glass, like real world Tetris.

The colour reproduction was not what I expected. I had expected fading and yellowing, based on age. There was very little of that, and what there was had no age pattern.

There were a few non-photo items such as newspaper clippings, and two school report cards from the 1930s. They considered deportment the primary subject to grade!

One correlation I did note was the amount of notes on the back of photos and slides. The older they were, the more writing there was. By the 1970s there was usually nothing, while the 1930s would have copious information about where, who, and why. Another was how many photos of an event there would be. For example a kid's birthday party in the 1950s might have one picture, steadily increasing to 30 or more by the 1990s.

Any tips?

Keep your fingers very dry! Any moisture (eg condensation from a cold drink you just had a sip of) will cause photos to stick together (even more), or to the scanner glass.

After scanning I put the photos/slides into batches of 100, separately bagging them with a numbered label corresponding to the folder name. This makes it easier to go from a digital scan to finding the physical photo, and to copy any notes across.

Bonus Time Machine: Rare Historical Photographs

Sizes

Slides were all the same size, with the actual image size being in metric and the cardboard frame being in inches! (There were about 10 slides that were a different size.) I plotted photo sizes and how many at each size.

Photo sizes and count

The x axis is photo area, while the y axis is how many photos were at that size. Photos before the 90s usually had a white border, which the scanning software usually crops out. Sometimes the border had writing, and in the 1950s it would have the processing date.
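
The plot is just a histogram of the scan dimensions. A minimal sketch of how you could produce something similar, assuming the scans are JPEGs in a folder and were all captured at a (hypothetical) 300 dpi:

    # Histogram of photo areas from scanned images (requires Pillow and matplotlib)
    from pathlib import Path
    from PIL import Image
    import matplotlib.pyplot as plt

    DPI = 300                      # assumed scan resolution
    areas = []
    for path in Path("scans").rglob("*.jpg"):
        with Image.open(path) as img:
            w, h = img.size
            areas.append((w / DPI) * (h / DPI))   # area in square inches

    plt.hist(areas, bins=50)
    plt.xlabel("photo area (square inches)")
    plt.ylabel("number of photos")
    plt.show()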

Category: misc – Tags: photos


The aviation business as seen by a coder

A while ago I was flying across the Atlantic in a half full $250 million Boeing 747, wondering how it all worked financially. Multiplying the few hundred dollars paid (round trip!) by the number of passengers and 20 years of flying didn't seem like it would pay for the plane, let alone crews, fuel, maintenance and everything else. I even visited an airline once on business, and asked an employee over lunch how an airline actually makes a profit. They did not know!

So here I am going to answer that, and also show the parallels to the software industry. The sources listed at the end include where I have picked up much of this information over the years.

Note

Unless otherwise stated, numbers given are for 2019. They are general ballparks for mainstream passenger airlines, with some variance throughout the industry, and US centric. The numbers for specific airlines and aircraft of interest to you are usually publicly available.

There are two primary parts to the business:

  • Operating an airline
  • Making planes

The software business often has companies that both make software for distribution to all, and separately operate that software as a service. In aviation those two businesses have been separate for almost a century.

Operating an airline

Simple: You spend vast quantities of people, money, and time. You will also outsource a lot. In return you will get back slightly more than you spent. In numbers it may cost you 12.4 cents per seat per mile flown, and you get back 12.6 cents per seat per mile, averaged across your entire operation.
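
Worked through for a single (hypothetical) flight, that margin is strikingly thin:

    # Hypothetical: 150 seat aircraft on a 1,000 mile flight, using the ballpark
    # 12.4 cents cost / 12.6 cents revenue per seat per mile
    seats, miles = 150, 1000
    cost = 0.124 * seats * miles      # $18,600
    revenue = 0.126 * seats * miles   # $18,900
    print(revenue - cost)             # $300 margin on ~$19,000 of revenue (~1.6%)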

The single most profitable thing is flying a full load of paying passengers, the bigger the plane the better. Until the 1990s a load factor of 65% was considered good. These days 90%+ is the target and that is usually the break-even point for a low cost carrier. Not filling your plane will lose you money, the bigger the plane the worse the loss.

The good news is that most expenses are proportional to flying time. For example the flight crew, cabin crew, fuel, maintenance, ATC fees etc are based on flight hours. Those expenses scale up with the size of the plane. Planes are flown for 8 to 12 hours a day, with bigger numbers being preferable.

Aside: Paying for planes

You won't be shelling out $250 million for a 747. The list prices were always aspirational, just like in enterprise software sales. The planes become a monthly payment, with lessors turning the big price into smaller monthly ones.

Based on this posting you can get a rough idea of what new aircraft cost in 2019. The prices go a lot lower for used/older aircraft. The number of seats varies by airline (eg business class seats take more space, there may be more or less galley space depending on flight lengths, and there are denser slimline seats as well as less dense, more comfortable thicker seats). Each aircraft also has sub-models offering incremental seating capacity at an incremental price.

Aircraft  Seats  Price   Monthly
A320      150    $44M    $330K
B737      150    $47M    $285K
A330      250    $82M    $640K
B787      250    $119M   $1M
A350      325    $148M   $1.1M
B777      350    $155M   $1.3M
A380      450    $230M   $1.7M

With a software lens, it is also an enterprise sale. When someone spends tens of millions per plane, and usually buys many of them, there is a complex sales process. There are even legendary salesmen.

Making money

This is a giant optimization problem that plays out over months and years. You have to figure out what tradeoffs to make, and constantly update them while your competition does the same. Airlines have staff whose whole job this is.

"Tightness"

You could schedule flights and turnaround time for the duration they usually take. But any delay then affects operations later in the day, since aircraft and crews aren't where they should be, causing cascading problems. Making things looser by adding padding gives more buffer should problems happen, but then those same planes and crews aren't making you any money. Worst case you may end up doing 3 flights a day with planes that could have done 4, while your competitors do 4, making a third more revenue and providing a better schedule.

Tightening things up is a great way of making the business more efficient, until events exceed your spare capacity (time, crews, planes, parts etc). That usually results in cancellations, irate passengers, and negative media coverage. The spare capacity has a cost too, especially as it isn't used most of the time.

Routes x Frequency

You want to serve as many places as possible to have a broad customer base. They won't want to split trips across multiple airlines. Travellers that are willing to pay more for tickets (eg business) also want more frequency so less of their time is spent waiting.

A common approach is to use smaller aircraft to feed passengers to larger hubs where they can be combined onto larger aircraft. But travellers willing to pay more want direct flights.

Fleet complexity

You can get aircraft with virtually any number of seats (eg 20 seat increments from 70 all the way to 550). That means you could operate the perfectly sized aircraft on each flight. But crew are certified for specific aircraft models, maintenance varies, engines vary, and overall you become less able to make changes.

Some airlines avoid the complexity by only operating one type of aircraft which makes it far easier to move crews, maintenance, spares etc around as needed. Others embrace the complexity by being able to put the perfect aircraft on each route.

Fleet age

New aircraft are the most expensive to pay for and you'll have to work them hard to cover that. You do get to customize the cabin easily, making for a better onboard experience. Maintenance is also a lot less. (Replacing the worn out cabin in a 550 seat A380 costs about the same as a new 150 seat B737.)

Older aircraft are a lot cheaper, so it is easier to fly them only when it is worth it. But you'll have a more tired cabin. Maintenance costs also go up, and reliability will go down (a little). They will cost more to fly due to being less fuel efficient.

You'll notice some airlines that brag of a youthful fleet get rid of planes at about 6 years old. That is when a heavy maintenance check (D Check) is done, which involves taking almost the entire plane apart, checking everything, and putting it back together.

Different offerings

You will not succeed if you charge every passenger the same amount. The standard is to charge more the closer to departure. It is common to have different seating classes, but you need to get the ratios right for the routes the aircraft operate - eg you want both business and economy class to be full, not one full while the other flies empty seats.

The easiest is charging for things that don't require changing the aircraft, like food, priority boarding, wifi, baggage etc.

Outsourcing

You will never be able to handle everything yourself. For example if you operate one flight a day to an airport, then it won't make sense to have full time check in staff, full time maintenance, full time luggage handlers, full time cleaning staff etc.

Unless you have a lot of a certain aircraft, it won't make sense to do heavy maintenance yourself.

But outsourcing is more expensive - you are helping another company make money. The airlines outsource a lot of things to each other.

Bonus: Freight

A silver lining is carrying freight in the hold of passenger aircraft. About 90% of air freight used to be carried by passenger aircraft. They already fly where people go and have a timely schedule, so putting unused baggage space to work is pure gravy. It can also be what makes a flight without a full passenger load still profitable.

Making planes

Simple: You spend vast quantities of people, money, and time. You will also outsource a lot. In return you will get back more than you spent, eventually, if the aircraft programme is successful.

There is a lot involved - this series covers it, and it is only in part 17 where you are actually designing an aircraft.

Lines of code is a useful but very imperfect metric for software. (It does correlate with effort, complexity, bugs, functionality etc though). The equivalent for aircraft is weight, and that is how aircraft size is often measured. Weight has to be added to carry fuel (how far you fly), to contain passenger seats, and for aircraft elements like wings, landing gear, pressure vessel, catering etc. And more weight means more expensive to manufacture, design, purchase and operate.

The most important part is the manufacturing stage. The more you do something the better you get at it, improving efficiency. The standard way of measuring this is to compare the cost to produce unit number n with unit number 2n - for example 10 vs 20, 50 vs 100, 500 vs 1,000. Aircraft manufacturing is around 77%, meaning each doubling of cumulative units brings the per-unit cost down to about 77% of what it was.

As an example, estimates are that the first Boeing 787 cost $2 billion to make. The machines had to be made, machines to make those machines, staff trained, procedures worked out, mistakes detected and prevented in the future etc. It would take close to 1,000 planes manufactured at that 77% improvement rate before the manufacturing cost meets the sale price listed earlier!
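
A sketch of that learning curve arithmetic, assuming the textbook model where each doubling of cumulative units multiplies the per-unit cost by 0.77 (the real programme accounting is far messier, so treat the numbers as illustrative only):

    import math

    def unit_cost(n, first_unit_cost=2e9, learning=0.77):
        # Wright's law: cost of unit n = cost of unit 1 * n ** log2(learning)
        return first_unit_cost * n ** math.log2(learning)

    for n in (1, 10, 100, 1000):
        print(f"unit {n}: ${unit_cost(n) / 1e6:,.0f}M")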

It is a careful choice of how much of the design and manufacturing to outsource. It isn't feasible to do all of it yourself. Outsourcing to specialists reduces your effort, reduces your control, but they also expect to get more of the rewards. The Boeing 787 programme tried significantly increasing the amount of outsourcing, and is a good case study.

Aside: Engines

Engines are purchased separately from the aircraft. There is no standard fitting between the aircraft and the engine, and an airframe + engine combination is what is certified. You are stuck with the same engine model for the lifetime of a particular airframe.

The airlines prefer as much engine choice as possible, while engine manufacturers prefer as little competition as possible. They also have large investments to pay back. For example GE made a deal with Boeing to be the exclusive engine supplier for the Boeing 777-300ER model.

How does a new aircraft work?

To interest the airlines, you'll need to have a 15% fuel consumption improvement over what is currently available. Much of that improvement will come from improved engines, while the rest comes from improved materials (especially lighter ones) and aerodynamic tweaks (designing a new wing is very expensive and effective). It will however require actually designing and building the airframe and engines before the exact numbers are known.

Just like version 1.0 software will have "issues", the first aircraft off the production line will too, usually being overweight. (That reduces payload & range, and increases fuel consumption.) There is usually some sort of performance guarantee.

Those initial aircraft are going to have the most teething issues. And they are going to cost you the most to make, after having spent billions of dollars and many years. They will also have lower second hand values. That means launch customers will strike a hard bargain for the aircraft that cost you the most to make!

Iterating

Bugs and small improvements are going to be found. Service bulletins (improvements) and airworthiness directives (affecting safety) are issued. There is careful tracking of each airframe, since the changes could be implemented early during production, or later during maintenance.

A group of improvements to the airframe and engines can be bundled together into a "performance improvement package" - good for a percent or two in reduced fuel consumption. That makes for an easy upsell to customers whose aircraft haven't been manufactured yet. It is rarely sensible to retrofit existing airframes.

The aircraft manufacturer is now in a good position to make some good money. Every successful aircraft has been stretched - adding additional fuselage frames ahead of and behind the wing (to maintain the center of gravity) and making space for a few more seats. The goal is to keep everything else similar - avoiding new crew training, different maintenance etc. The airlines like it too - if you are flying a route with 200 seats that you routinely fill, then the same plane slightly longer and slightly heavier with the same crew and 220 seats makes things easy. As an example, Wikipedia lists the 747 derivatives doing just that.

Semver

While software has its versioning, aircraft have a different convention. Using the Boeing 747 as an example:

747: Refers to all aircraft of this family

747-100: The first model produced. The second was 747-200 etc. It isn't always the case that the model starts at -100 - eg if the second is expected to be a smaller aircraft then models may start at -200 with a -100 coming later.

747-436: While the model is conventionally referred to as the -400, they are different for each customer. The -436 is what British Airways had because of their specific engine and other choices, like how the cabin is configured. There needs to be plumbing for toilets, electrical power for the galleys, and often choices about space being used for bags or additional fuel tanks. One documentary I saw years ago explained how customers could choose whether the clipboard clasp on the captain's controls was at the side or on top. When spending millions, the customer gets to decide!

Rewriting from scratch

By far the easiest thing to do is keep tweaking existing models. It costs about $2bn and 3 years to update and certify a new engine, and you may even be able to get the engine manufacturer to pay for that. The Boeing 737 has been going since 1968! That compares to the $20bn and 10 years for a clean sheet design, if everything goes well.

Eventually it gets too difficult - making the airframe longer becomes impractical, or needs longer landing gear which needs larger landing gear bays which forces all the other belly components to move. The efficiency improvements are harder to come by since you've done it several times. Rewriting from scratch will fix all these and more, but you won't know for 10 years. It is a difficult complex decision, just as with software.

Good sources

Leeham News and Analysis
Excellent coverage of airlines and manufacturers, with good in depth analysis.
Skyships Eng
Wikipedia has good textual pages for aircraft. This Youtube channel has discussion and video of commercial aircraft history and operations.
Cranky Flier
Covers airline operations. During the pandemic Cranky has shown how the airlines kept updating their schedules and routes. There are also interviews with airline executives, and airport operators.
The Aviation Herald
Covers daily operational incidents world wide. You get an idea of how often bird strikes, engine issues, tail strikes etc happen (about 4 a day).

Category: misc – Tags: aviation


Metric Won!

It turns out the metric system has completely won, but some just haven't realised it yet! If you are in the US, and need a definitive ruling on distance, weight etc then you'd think that somewhere there is the golden definition of an inch, a gallon, a pound and so on. There is. They are defined in terms of the metric system.

To quote at 8m45s in:

The most ridiculous thing about all of this? Every single one of these imperial measurements are legally defined by the metric system. America is already using the metric system, and most of the population is oblivious to it.

The imperial system wasn't even sensible. The US uses feet, but also uses survey feet, which are finally becoming one. US and UK gallons remain gratuitously different, so miles per gallon doesn't translate.

The final frontier is recipes randomly switching amongst volumes and weights with imperial units (quick: what is the weight difference between a fluid ounce and an ounce of water or butter?)
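
For what it's worth, a rough answer using approximate densities (the exact figures depend on temperature and the butter):

    # Approximate weight of one US fluid ounce of each ingredient, in avoirdupois ounces
    ML_PER_US_FLOZ = 29.5735
    GRAMS_PER_OZ = 28.3495
    for name, density_g_per_ml in (("water", 1.00), ("butter", 0.91)):
        grams = ML_PER_US_FLOZ * density_g_per_ml
        print(f"1 US fl oz of {name} weighs about {grams / GRAMS_PER_OZ:.2f} oz")
    # water comes out around 1.04 oz, butter around 0.95 oz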

Category: misc – Tags: metric


A decade of hindsight

I wrote a bunch of stuff over the last 10 years. Now that we know what happened, this is my look back. The followups below are ordered most recent post first, getting older from there.

History podcasts did well, and keep getting better.

I liked the Casio Smartwatch, but WearOS and its apps aren't getting much development. (It was particularly frustrating when even Google didn't bother to keep their apps working, and some third party ones just stopped working one day.) It turns out the Apple Watch is also 50m water resistant, will also run for 30 days in power reserve mode, and can also be charged without taking the watch off. Those were the important base features of the Casio to me. The Apple Watch also has a 1,000 nit display (ie sunlight readable), and you get a microphone, speakers, and NFC. Plus the rectangular display is better for those like me who prefer digital watchfaces with lots of information. Even Google's apps work better. I switched.

I keep wishing Emacs well. Language servers have made development environments easier. In the end I did abandon Atom in favour of Visual Studio Code. While vscode doesn't have tramp mode, the remote development is good enough.

I was very wrong about Mario Kart 8. It is a lot of fun, and Nintendo fixed many of the Wii version issues. We try the Wii version again every now and then, and it seems less fun than we remembered.

I had to switch from Nikola to Pelican. The main features of Pelican are a far slower development pace, and not sucking in lots of dependencies. Nikola went full tilt adding many features quickly, but that made it hard to run only infrequently, since each time there would be a lot more to update. Additionally, every time I ran it there was a blizzard of messages about deprecations and configuration changes.

Support is still a problem. It is still usually treated as a cost centre, with incentives to do as little as possible. It is easier to support smaller numbers of customers who have paid more for a product, but offering support to large numbers of people cheaply doesn't seem to be done by anyone.

SSL was fixed by Let's Encrypt.

RSS is still around, but not as mainstream as the days of Google Reader. I still use it.

Self driving cars are still just around the corner, while there is more evidence of just how bad human drivers are.

I still have trouble with voice recognition. Most of the services do get it mostly right now, but when they get it wrong it is very wrong. Any other humans in the room usually also burst into laughter, due to what the service did. For example I may ask for a temperature conversion, and instead the service will start reading out some obscure fact.

Category: misc


Exit Review: Python 2 (and some related thoughts)

Python 2 has come to an end. I ported the last of my personal scripts to Python 3 a few months ago.

Perhaps the greatest feature of Python 2 was that after the first few releases, it stayed stable. Code ran and worked. New releases didn't break anything. It was predictable. And existing Python 2 code won't break for a long time.

The end of Python 2 has led to the end of that stability, which isn't a bad thing. Python 3 is now competing across a broader ecosystem of languages and environments trying to improve developer and runtime efficiency. Great!

I did see a quote that Python is generally the second best solution to any problem. That is a good summary, and shows why Python is so useful when you need to solve many different problems. It is also my review of Python 2.

So let's have some musings ...

Python has had poor timing. The first Python release (1994) was when unicode was being developed, so the second major Python version (2000) had to bolt on unicode support. But if it had waited a few more years, then things could have been simpler by going straight to utf8 (see also PEP 0538).

Every language has been adding async, with Python 3 (2008) increasing support with each minor release. However, like most other languages, functions ended up coloured. This will end up solved, almost certainly by having the runtime automagically do the right thing.
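
A minimal illustration of the colouring problem in Python: async ("red") functions can call plain ("blue") ones freely, but a plain function can't get a value out of an async one without extra machinery such as an event loop.

    import asyncio

    def plain():                    # a "blue" function
        return 42

    async def coloured():           # a "red" function - callers must await it
        return plain()              # calling blue from red is fine

    def blue_caller():
        # coloured() on its own just returns a coroutine object; getting the value
        # requires running an event loop, which is the machinery colouring forces on you
        return asyncio.run(coloured())

    print(blue_caller())            # 42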

Python 3 made a big mistake with the 2to3 tool. It works exactly as described, but it had the unfortunate effect of maintainers keeping their code in Python 2, and using the tool to make releases that supported both Python 2 and 3. The counter-example is JavaScript, where tools let you write the most recent syntax and transpile to support older versions. Hopefully future Python migration tools will follow the same pattern, so that code can be maintained in the most recent release and transpiled to support older versions. This should also be the case for using the C API.
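
The direction of the flow is the issue. A tiny, hypothetical example of what 2to3 does: the Python 2 file stays the one you maintain, and the Python 3 file is generated output.

    # A Python 2 file you keep maintaining might contain:
    #     print "processed %d rows" % count
    # Running 2to3 over it emits the Python 3 equivalent, which is what you release:
    count = 10
    print("processed %d rows" % count)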

The CPython C API is quite nice for a C based object API. Even the internal objects use it. It followed the standard pattern of the time, with an object (structure) pointer and methods taking it as a parameter. There are also macros for "optimised access". But this style makes changing underlying implementation details difficult, as alternate Python interpreter implementations have found out. If, for example, a handle based API had been used instead, it would have been slower due to an indirection, but would have allowed easier changing of implementation details.

Another mistake was not namespacing the third party package repository PyPI. Others have made the same mistake. For example when SourceForge was a thing, they did not use namespacing so the urls were sf.net/projectname - which then led to issues over who legitimately owned projectname. Github added namespaces so the urls are github.com/user/projectname. (user can also be an organization.) This means the same projectname can exist many times over. That makes forking really easy, and is perhaps one of the most important software freedoms.

Using NPM as an example, this is the only package that can be named database. It hasn't been updated in 6 years. On PyPI this is apsw and hasn't been updated in 5 years. (I am the apsw author updating it about quarterly but not the publisher on PyPI for reasons.) Go does use namespacing. A single namespace prevents forks (under the same name) and also makes name squatting very easy. Hopefully Python will figure out a nice solution.

Category: misc – Tags: exit review, python


Recommended: History of podcasts

I'm a fan of podcasts and especially longer form history podcasts. I've found that "History of" podcasts that cover various empires and locations seem to be rather good. The History of Rome podcast is a very good example, with many others following that format and principles. The format allows the shows to adapt over time, include listener feedback, and do experiments which often work well.

If you can't get enough, then Hardcore History has many good episodes and stories.

And at the meta level, there is a History of *History of podcast* podcasts.

Category: misc – Tags: recommendation


My Casio Smartwatch WSD-F30 experience

Summary

The manual (pdf) is comprehensive and describes the non-WearOS functionality well. r/WearOS covers the WearOS side - check the sidebar too. It is also worth noting that current watches tend to use identical hardware (same Qualcomm chipset, same screen resolution, same RAM, same storage etc), although extras like microphones, speakers, and NFC differ.

Starting point

I've used Casio digital watches for as long as I can remember. Because they are water resistant, the watch can go anywhere I do, and I never take it off. My favourites over the last decade have been the Solar Atomic models. Solar means I never need to change the battery, and "atomic" means picking up the radio time signals that come from an atomic clock.

Smartwatch?

Watches provide two conveniences for me - it is always there, and I can look at it very quickly. Phones are in chargers, pockets, etc and take longer to extract and navigate to what you wanted to see.

Needing to be familiar with smartwatches, and to do development work, I naturally picked the Casio offering, which is upper mid-range in pricing.

First Time User Experience (software)

The FTUE is terrible. Android Wear (now WearOS) watches are not mature yet, and require a lot of compromise to keep within the available battery, cpu, and software functionality. It feels a lot like being given a decade old phone and told to make it work now.

Simultaneously the watch will be doing system updates, installing or updating apps, and showing some tutorial overlay you can't just dismiss. All the while you are learning the compromises you'll have to make.

To be clear - it is sluggish. There will be 5 seconds between taps and resulting actions. The screen will go black for several seconds while apps launch. You are never certain if touches or button presses registered, and often end up doubling them which makes things worse. I also found the onscreen keyboard useless since I could never touch the right spot.

Things do settle down over time, but that sluggishness still remains some of the time. What helped me the most was to enable developer options and turn on "Show Taps". That confirms a tap was registered and shows where it was, helping with feedback and making the keyboard more useful.

Charging

Charging is done with a magnetically attached cable. The box came with a small USB power brick, and the USB to round magnet charging cable. I have never used the supplied power brick, and have had no problem connecting to any USB power source. I also bought a third party USB C to magnet cable off Amazon, and use it most of the time. In short the watch is not fussy about charging.

When sitting at my desk, the cable will stay in place providing there isn't too much unsupported cable length, so that is the main way I charge the watch.

Watch Display

  • A monochrome digital time display, easily readable in sunlight and difficult to read in low light. Uses a lot less power than the colour display. You can run in this mode for 30 days with WearOS turned off. When WearOS is running, only Casio apps can write to this screen (other apps just get the standard time display).
  • Ambient mode colour display (lowest brightness). Unreadable in direct or indirect sunlight. This is used when idle with power consumption based on how many pixels are not black.
  • Colour display which uses lots of power, is readable in indirect sunlight and generally impossible to read in direct sunlight.

If you have the full colour display on and are interacting with apps, a full battery will be drained in about an hour. Consequently much of using the watch comes down to choosing the display mode that trades off power consumption, readability, and response time the way you want.

You can have the display activated by touch, button press, and rotating your wrist. My experience of wrist activation is that it rarely works when you want it to, and often activates when you don't. Because it activates full brightness, the battery can be very quickly drained.

Thoughts

WearOS is a lot less mature than expected. It is unclear if Google is losing interest.

Most watch faces try to be pretty and are based on analog hands. It is difficult to find dense digital displays.

The Casio apps do work well. I'm glad Casio used WearOS instead of doing their own operating system with limited apps etc. However the result, including their G-Shock style case, seems pricey. A few more years of new models should improve this.

Ultimately you figure out how to get the watch to work for you, requiring more administration than a non-smartwatch. For me the benefits outweigh the hassle. I use Theater Mode from quick settings to have the time showing most of the time.

Category: misc – Tags: review


On defaults

I've been wondering what best practice for handling defaults is. In software there are generally 3 values: zero, one, or many. As a consequence developers often pick a sensible number for "many", and allow configuration to change it.
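
A hypothetical sketch of that pattern (the names are made up): the chosen "many" becomes a constant and a setting.

    # The developer picks a "sensible" many; it becomes a default users can override.
    import os

    DEFAULT_MAX_CONNECTIONS = 100   # the arbitrary "many"

    def max_connections():
        # overridable via an environment variable or config file; the default value
        # quietly spreads into docs, forum answers, and other systems
        return int(os.environ.get("MYAPP_MAX_CONNECTIONS", DEFAULT_MAX_CONNECTIONS))

    print(max_connections())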

Eventually defaults permeate the code, settings, user interfaces, product documentation, user forums, and search engine results. It spreads not from a single source of truth that tracks and propagates changes, but by being arbitrarily copied between systems.

As time passes, the default values need to change due to circumstances and experience. New features make existing values need refinement, while new interactions complicate matters.

The usual solution is to bump the major version and have humans, code, and documentation deal with the changes. That effort, especially all the settings changes, is what makes so many of us resistant to doing major version upgrades.

Starting software after a version upgrade is always a pain. Sometimes you are pleasantly surprised that it just works, but usually the logs are full of complaints about settings, things that previously worked no longer working and general yak shaving.

Postfix has a compatibility level to help defer the effort after a major version upgrade, but you are still on the hook for the upgrade changes.

An anti-pattern is software that generates an initial config file for you. It does have a very short path between default settings and the generated config file, usually including comments and explanations in that file. This is fantastic to start with.

But it causes problems in time. The settings, comments and explanations become wrong. Looking at a config file that is a few years old is an exercise in archaeology and contradictions, requiring consulting the file, warning/error messages, logs, wikis, and other documentation.

So far the best I have is to prefer more 'automatic' settings, and keep the number of settings to a minimum.

Category: misc


Exit review: Emacs

A shocking time has come - I've given up Emacs, after using it for 20 years. When interviewing developers, one of the questions I ask is about their favourite editor. I don't care what the answer is, but I do very much care about why it is. An editor is a fundamental part of developer productivity, so I want to hear about the candidate caring about their own productivity and trying to improve it on an ongoing basis.

The irony is that I was using the same editor for decades. I did keep trying to find improvements, but never could. There are two sides to Emacs - one is as a competent & coherent editor, and the other is "living" in it. It has builtin web browsing, image viewing, email and news support, terminal emulators etc. I was never one of those.

Before Emacs I used vi. Its modal interface, small size, and availability on all systems make it a good tool. However it was text console only, and didn't do colour, menus, multiple files or other useful functionality. (It does now.) vi does have a learning curve - I estimate it takes about 4 years to be good with it, and 8 years to be an expert!

I had known about Emacs for a while, but it was text console only, and didn't do colour, or menus. Each attempt to use it left me frustrated with what amounts to another arbitrary set of keystrokes. (I've always been a cross platform person so I was also juggling keystrokes for other operating systems and applications.) A colleague (hi Jules) introduced several of us to XEmacs around 1995. It had a gui, and colour, and most importantly a menu system. It was no longer necessary to memorize a large set of new keystrokes, as the menus showed them. You could do everything without knowing any, and then pick up those you use often enough.

By the mid 2000s XEmacs was languishing, and Emacs was slowly catching up with the gui. More and more packages only worked with regular Emacs (there were small but growing incompatibilities). I eventually made the switch from XEmacs to regular Emacs.

There was an explosion in different file types I was editing: Python, C, Javascript, Java, Objective-C, HTML, HTML with Jinja Templates, JSON, matlab, CSS, build scripts, SQL, and many more I have forgotten. Emacs had support for most. Support means syntax highlighting, indenting, jumping around notable symbols etc. More packages were produced that did linting (looking for common errors), and various other useful productivity enhancements.

At the same time a new editor Sublime Text was introduced. It had fantastic new interaction (goto anything, projects, command palettes, multiple selections, distraction free) and a rich package system (written in Python - yay!) I kept trying it, but kept finding issues that affected me. Development also seemed to drastically slow, and since it was closed source there was no way for others to improve and update the core.

Meanwhile Emacs became more and more frustrating. The web (HTML, Javascript, CSS) is not a first class citizen. Not many packages were distributed with the core, so you had to copy cryptic elisp code from various places, or use strange tools to try to get them installed and kept up to date. Then you had to do that on each machine. Heck, the package repositories (eg MELPA) didn't even use SSL by default! My emacs configuration file kept getting longer and longer.

Ultimately tools these days are defined by their vibrant community, useful defaults, and easy to use extension mechanisms. Emacs has all those, especially in the past. But they are of a different era and different cadence.

I have switched to Atom. It had a rough initial exposure with performance problems, and the extremely dubious choice of being closed source. However both have been addressed. Just days before Atom 1.2 was released, I removed Emacs in favour of Atom 1.1. My configuration file is 10 lines long, and I get the same experience on every machine.

Category: misc – Tags: exit review
