Terkwood Farms

This notebook contains items primarily relating to computer technology.

About the Author

I am an independent software producer specializing in data and distributed systems.

I have made various open source works available on GitHub.

BUGOUT: Play Go against AI πŸ’ͺ and Human Friends πŸ‘ͺ

BUGOUT provides a web interface to KataGo, a leading, community-driven AI skilled in the classic board game, Go.

BUGOUT also features a multiplayer option for playing against human friends, if you're lucky enough to have them.

BUGOUT's web interface is a derivative of Sabaki, the much-loved desktop app which is already capable of hooking into KataGo, Leela Zero, and GNU Go.

Source Code for the Project

The project is open-sourced, currently under the MIT license. It is available on GitHub.

Emotional Backdrop

I started playing GO sometime during high school. Several of my friends also learned to play. We would often play chess, for which I had a neat little carrying case with a simple game clock, a rollout board, and sturdy pieces. I came into possession of a GO board of standard dimensions, with plastic pieces that were just heavy enough to feel like they mattered.

That Good, Old Board

We would sometimes meet at a Chinese dive on West 10th Street in Indianapolis, consume delicious, sugary food, and play.

GO felt different than chess. There was a sense of immense possibility. There was time to think about the future. Winning was hard. Getting your skill up with a same-stage newbie was exhilarating, because you grew together.

Eventually I got my board signed by VICTOR WOOTEN while attending a concert performance given by BΓ©la Fleck and the Flecktones at the Indiana Roof Ballroom: we had been playing some small rounds prior to the start of the concert.

It must have been... 1999? Victor Wooten was super gracious, and inspired everyone to a wholesome life of technical mastery. I was young. I wore an undersized fedora.

The music was phenomenal, and Wooten's signature fueled my fire.

That Good Wooten

In college I dreamt that one day I would wake to the sound of an obstreperous trumpet, having been mired in games and strenuous efforts to build some sort of ultimate strategy. I dreamt that I would wake to a vision of clear dawn.

But the actual time I've spent playing GO in my life is tiny. Life goes on, and its daily demands obscure our roots.

Still, the aesthetic delight of GO has been enjoyed by millions of individuals over the centuries, and it's this very beauty which has inspired my work over the last nine months.

GO is a game which helps me combat the lassitude of experience and boredom.

GO is not a game which must be won. It's enough to face a player whose skill exceeds yours, and to learn. When I play 9x9 against KataGo running on a 5W computer, I'm happy if I manage not to lose an embarrassing number of pieces.

GO keeps me sharp.

Building the Project

BUGOUT started out as an excuse to implement something (anything) using Kafka. My previous two professional engagements had barely skirted opportunities to legitimately include a Kafka install: one was simple enough to rely on HTTP microservices, another had opted for Kinesis.

Fresh into successful semi-retirement, I didn't want to bother with finding a use case for my implementation. I wanted what I couldn't have reasonably argued for in past commercial settings. I wanted to play with Kafka Streams!

It turns out that mapping the tiny domain model of a GO game to Kafka Streams is... easy. Initially BUGOUT was envisioned as a multiplayer GO board which could easily run in any modern browser and provide an enjoyable, boutique alternative to more popular servers and venues. I wanted to play again with my old friends, just like when we were kids. So the project quickly incorporated a game lobby system which allowed players to choose between joining a quick 19x19 game with the next person to visit the web page, or creating a nicely-formatted URL (https://example.com/?join=nXblGBE7erWyocXYYpRN1YOzdD) for their private game and sharing it with a friend.

Choosing a Venue

Board Size Options

Turn Order Preference

Sharing a Link to a Game

Joining a Link to a Game

Finally, both players can be fairly assigned a side, and the game can begin.

Your Color

Finally, A Game

It basically worked. I spent time making sure the websocket connection between the browser and the BUGOUT gateway server was solid. I had to guarantee that Kafka and all its dependent apps started up in an orderly fashion. And I enjoyed writing functional-ish Kafka Streams code in Kotlin: from a cognitive perspective, it felt clean and tidy, even if the topology graphs quickly got out of hand.

Game Lobby Message Processing: This is Getting Insane

Too Cheap for Pro Tier

It's pretty obvious to anyone who spends 30 minutes in the Kafka literature that a production deployment of Kafka involves at least three machines. I was running a single node, because my average user load was zero. What's more, I was paying for AWS compute time using a personal credit card, so I've never been exactly fired up about running an individual host that comes anywhere near the specs recommended by the vendor.

Some additional reading turned up that 6GB of RAM is a reasonable minimum for any single host:

In most cases, Kafka can run optimally with 6 GB of RAM for heap space. For especially heavy production loads, use machines with 32 GB or more. Extra RAM will be used to bolster OS page cache and improve client throughput. While Kafka can run with less RAM, its ability to handle load is hampered when less memory is available.

Ever the miser, I settled on running a t3.medium compute host for my degenerate Kafka cluster with its single, lonely node. 4GB of RAM certainly worked well enough for my theoretically scalable backend -- at least as long as it didn't become popular.

Anyway, who cares about vendor-recommended minima or a realistic-looking deployment? I was liberated from corporate hierarchy, profit motive, etc., and was ready to experience the raw freedom that I'd always dreamed about as a junior developer. I was ready to play with toys.

The only problem was that running even a t3.medium around the clock cost about $1/day.

πŸ€‘ That's expensive! πŸ€‘

Turning Kafka On and Off with Rust + AWS

The windmills were ready to tilt.

I spent a non-trivial amount of effort using rusoto to control the startup and shutdown of my "very expensive" t3.medium instance. Whenever the system detects that there are no users connected, a timer starts, and after a few minutes, it shuts down the t3.medium running Kafka, and all of the memory-hogging Kafka Streams apps supporting gameplay.

Whenever someone comes back to the website and wants to play a game, the system loyally boots the Kafka host.
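
Just for flavor, here's a minimal sketch of what the shutdown half might look like with rusoto's async EC2 client. This is not the actual BUGOUT code; the function name, instance ID handling, and error reporting are placeholder assumptions.

use rusoto_core::Region;
use rusoto_ec2::{Ec2, Ec2Client, StopInstancesRequest};

// Hypothetical sketch: stop the Kafka host once the idle timer fires.
async fn stop_kafka_host(instance_id: &str) {
    let client = Ec2Client::new(Region::UsEast1);
    let req = StopInstancesRequest {
        instance_ids: vec![instance_id.to_string()],
        ..Default::default()
    };
    if let Err(e) = client.stop_instances(req).await {
        eprintln!("failed to stop the Kafka host: {}", e);
    }
}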

And that's great. I can still afford to eat. Working with rusoto was painless. But the user experience of waiting for BUGOUT's Kafka backend to initialize is... abysmal. It takes about 90 seconds for the EC2 instance and its various docker containers to start up. That's 81 seconds longer than eternity.

Starting on the Micro-Stack

Enough was enough. I had no boss. I had no users. I had no chains. My self-respect was questionable.

I would re-implement the backend with redis streams, so that my game could be online 24/7, using leftover CPU and RAM on the t3.micro instance that I was already paying for.

Why not? Redis is awesome! And by choosing rust for this portion of the impl, I knew I could keep my memory and CPU footprint low.

The effort is ongoing. It will eventually be completed in the way that one completes a crocheted scarf.

Welcome KataGo

After months of relatively steady operation, I came to the sobering conclusion that most of my close friends were busy with their own lives, and playing a synchronous session of GO with them, often across international time zones, was an exercise in scheduling prowess that would defeat even the most skilled project manager.

So I began to wonder about playing against an AlphaZero-like AI through my browser.

If I can't play against my friends...

...why not BUILD MY OWN FRIEND?

Kata Meets Nano

Thus was born the effort to integrate KataGo, a community implementation of cutting-edge Go AI that's gotten a nice little boost thanks to hardware sponsorship from Jane Street.

In fact, you can already play against KataGo online.

That's fine. I like building things on my own -- or in this case, integrating other people's things on my own. So I purchased an NVIDIA Jetson Nano System-on-a-Chip and set about linking it to my existing infrastructure.

The effort proceeded smoothly. Soon I was suffering regular blows to my pride, delivered easily by a 128-core GPU that can subsist on a mere 5W of power. NVIDIA's pre-installed CUDA libs made compilation of KataGo relatively easy. Hooking up the tiny SoC in my office to my dirt-cheap AWS t3.micro felt pleasantly conservative.

Nvidia Jetson Nano Unboxed

A Happy Ending

There isn't much work left before the AI subsystem is complete. It's already possible to play against KataGo through the web interface, if you're patient enough to run a read-eval-print-loop and put Kata's pieces on the board manually.

After just a few more hours of work, I'll be able to head to my website from anywhere in the world, and play a power-efficient little bot that's just right for my skill level.

And then, free from the lamentable desire for human interaction, I'll play against my own, new type of friend: a friend born of silicon and parallel arithmetic.

Love the Log

Losing peacefully to SkyNet

We're Also On Dev.to

This article was cross-posted on dev.to on March 30, 2020.

Further Inspiration

Don't let the crisis get you down.

help is on the way

Yoshiki Kurita (right), who was the first amateur to enter the C-League, [a student] in the second year of the Tokyo University of Science, will play against Mochizuki Kenpachi 18-dan (left) in the league's first match on [April] 2nd [...]. The C league has five races in all. If you win 5 consecutive times, you will participate in the challenger decision tournament and if you win 4 times, you will be promoted to the B league.

Thanks to asahi_igo🐦 for this tweet. Text translated with Google Translate.

Agent 57: Outperforming the Human Atari Benchmark (Deepmind)

This article talks about general intelligence as it applies to playing a bunch of Atari games.

American Go Association

The American Go Association has made a lot of information available over a long period of time. Try browsing their Learn to Play page.

Their news column may contain rewarding reading.

Michael Redmond

Even abysmal players stuck with the English language can find some budding resources online.

Take a look at Michael Redmond's Twitter profile🐦 for interesting plays and tips.

Still Feeling Sluggish?

You can keep digging and find inspiration. Don't give up! β™ŸοΈ

Last Mile of KataGo (Apr 11, 2020)

At or around 4947cc6...

Notes

1200. Issue #67 is almost complete. KataGo responds if the human picks WHITE. The response is lost because the UI seems broken (undef dialog).

1500. If you play a 19x19 with human as WHITE, KataGo is able to send a move back. Highlighted that 9x9 has wrong board coords attached at some point in the system (Sabaki receives a MakeMove from BUGOUT with a negative coord).

But wait: the second turn is from the human, but it looks duplicated?

[2020-04-11T19:02:56Z INFO  micro_judge::io::stream] ACCEPTED: MoveMade { game_id: GameId(abb65823-febe-49fd-9d61-9b9374cf943f), reply_to: ReqId(a848cad4-6d46-44b9-8389-289427307f3d), event_id: EventId(c13797b3-5250-41f8-ac57-0005a59b1d6f), player: BLACK, coord: Some(Coord { x: 2, y: 3 }), captured: [] }


[2020-04-11T19:03:04Z INFO  micro_judge::io::stream] ACCEPTED: MoveMade { game_id: GameId(abb65823-febe-49fd-9d61-9b9374cf943f), reply_to: ReqId(3c34a9e5-911c-4d6f-9030-abd43aca4f75), event_id: EventId(2415ac47-2ece-423c-a9e9-acb861d11fb5), player: BLACK, coord: Some(Coord { x: 2, y: 3 }), captured: [] }

1522. Hacked Sabaki (3fc20268) to avoid sending duplicates. Now I see that game state changelog isn't being propagated correctly. Time for a break.

1542. Time to consolidate my progress in gateway and make a commit.

1636. I don't understand why micro-changelog tracks game states correctly, but micro-judge doesn't. The changelog service XADDs faithfully. micro-judge seems to miss the update.

1705. Judge is failing to update ITS game state changelog entry. Changelog service is reading its own stream and updating its repo correctly. BAD Judge. GOOD Changelog!

1716. Judge appears to be spinning in its stream processor.

1738. I can see that the XRead EntryID in judge isn't updated for game state.

1755. Is the Redis XREAD deserialization type correct?

pub type XReadResult = Vec<HashMap<String, Vec<HashMap<String, HashMap<String, String>>>>>;
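// i.e., one map per reply: stream key → Vec of { entry ID → { field → value } }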

1812. Commit e73e8e2 has Judge assuming an XREAD with Vec<u8> data, not String. Gonna be annoying to pull this out if it doesn't help, which it probably won't.

1818. It worked! Judge now updates its game state changelog correctly. All it cost was an extra round of .clone()s. Well, I still can't progress past move two in Sabaki, but that's probably unrelated. I'm going to commit micro-judge and micro-changelog now, since they've moved forward.

1828. Raised my PR and am shutting down this very pricey t3.medium that I use for development. Isn't-it-romantic.

1954. Came back and merged my PR. Trying again with Sabaki to see where the AI's made move event gets dropped. Gateway and services seem solid enough now.

2004. Trying out deno file_server -p 8000 . instead of python -m SimpleHTTPServer.

2020. I am still tracing through Sabaki trying to find out why gtp.js function listenForMove doesn't get called. Done for now.

Easter 2020

Looks like the Sabaki web branch is deprecated. Let's consider slimming down our forked unstable branch to the bare minimum and trying again with the KataGo mods.

1324. Trimmed some dead classes out of Sabaki. Looks like deno file_server doesn't like to serve our /?join=ABC links! Back to python...

1350. Closed one set of trimmings and opened another.

1354. Here are more things we should delete:

  • fileformats/gib.js and ngf.js
  • App.js section if (prevState.fullScreen !== this.state.fullScreen)

1405. Opened a PR to remove update checking. Closed #54.

1425. Closed #55 and opened a change set trimming file format procedures. Deployed for a little bit more testing.

1501. Merged #56. Ventured into enginesyncer.js unsuccessfully. Gave up on that change set and opened another one trimming App.js and main.js.

1516. Test deploy. I'm close to a <200KB package for users to download, which is nice. The initial build, at the very start of the BUGOUT effort, was about 315KB?

bundle.js  191 KiB       0  [emitted]  main

1536. Almost messed up the QUIT button. Ko and suicide popups probably won't work, but that's OK. Still driving the simplicity score on this app UP. Yet another PR. 182KB size.

1702. For a minute I was worried that history provider routines in the browser were broken. No: as usual, history provider seems to intermittently not respond. A couple of reboots of the container host and it came back. Continuing to trim aggressively in #58, will redeploy.

1706. 180KB bundle! 🌟

1756. Is there anything else I can trim?

1822. 176KB. And I've trimmed out dead code from none other than the illustrious Goban.js.

1831. The "dead code" in Goban.js wasn't so dead. Almost lost the heat map at end-of-game. Still finding places to cut! 178KB.

2124. One last pass.

Mon Apr 13, 2020

1211. I am very close to having #67 finished. I need the browser to work with all of the server components. I don't believe there are any bugs or gaps left in the server components.

For the browser:

  • Do not check whether BUGOUT is online if the EntryMethod is PLAY_BOT.
  • Go slowly and be deliberate about what you add to the browser codebase.
  • GatewayConn needs a method playBot.
  • We need to be careful about the modals. On one hand, it's nice not to overload the existing modals, if possible. That said, the eventing code which supports the modals is a pain in the butt to work with.

Let's start by adding EntryMethod. We know that's going to be necessary.

1239. Inching into this effort: we broke up a huge constructor while adding the new EntryMethod.PLAY_BOT option.

1249. Refactored the Color Choice dialog for play-vs-humans to have a clearer name. I don't want to reuse this dialog entirely for the bot-play. I'll create a new modal for bot-play, but will overload the choose-color-pref event which the original dialog emits.

Let's take a break.

1304. Slipped one more change set into the mix before my break. This is good. The audits that NPM had been complaining about were taking up more cognitive space than necessary -- just some pesky dev deps. β˜• BREAK TIME! β˜•

Tue Apr 14, 2020

1204. We can add some Modals.

1737. We added a wait-for-bot modal. We're not actually using it yet, but it's minimal and should work fine for our purposes.

Fri Apr 17, 2020

1635. I plan to refactor some of our multiplayer callbacks into a less-annoying, evented thing. I also plan to move Sabaki into the monorepo so that we can tie BUGOUT-wide releases to a given state of Sabaki.

1642. It takes seven minutes to write a ticket!? This is in no way surprising. It still saves 21 minutes of otherwise-future-time spent thinking about the issue.

1650. Chromium just sucks less during the web dev cycle. I find that I have a hard time getting the freshest version of my changes under FF private mode.

I HAVE SIX LINES OF CHANGES!

1708. Completed here. Let's consider making some progress on #67.

1756. One more PR to add a modal for color selection for the bot play. No, it's not yet hooked up to anything!

Sat Apr 18, 2020

1221. We should make a pass at the remaining UI work:

  • network calls to gateway
  • the initial dialog altered
  • re-use the existing board-size dialog

We may want to refactor the existing board-size dialog to make it more extensible. Not sure if it's necessary. Let's treat its existing behavior as closed.

1310. Extended the board-size dialog init condition.

Sun Apr 19, 2020

1907. Success! First ever bot game. But something broke in the system and one of the moves computed by KataGo wasn't communicated back to the browser. Doesn't matter, we'll figure it out. Happy path worked for the first time!

There is an important TODO TODO in gtp.js related to tracking color.

And we can only play black? We need to listenForMove.

Mon Apr 20, 2020

1524. Whack-a-bug. Look at micro-judge:

[2020-04-20T19:19:06Z ERROR micro_judge::io::stream] MOVE REJECTED: MakeMoveCommand {
        game_id: GameId(
            22ff3e32-4d60-4582-a7d7-607b89e580d3,
        ),
        req_id: ReqId(
            78b33e57-e8ac-425c-93b7-ba0d5b00988b,
        ),
        player: WHITE,
        coord: Some(
            Coord {
                x: 3,
                y: 3,
            },
        ),
    }

1626. Investigating board.js coordToVertex and vertexToCoord in Sabaki.

1629. TINYBRAIN needs to honor the reversal in the Y-Axis.

1643. We won't translate the alphanumeric coordinate.

1730. Strip PASS out at tinybrain. Send the (char, u16) combo up to botlink. botlink can remember the board size and convert back to domain-model Coord.

For a 9x9 board,

A1 = Coord { x: 0, y: 8 }
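
Here's a sketch of that conversion (the function is an assumption, not botlink's actual code). GTP columns skip the letter I, and the Y axis runs opposite to our domain model's.

#[derive(Debug, PartialEq)]
pub struct Coord { pub x: u16, pub y: u16 }

/// Convert a GTP coordinate like ('A', 1) into a domain-model Coord.
pub fn from_gtp(column: char, row: u16, board_size: u16) -> Coord {
    let c = column.to_ascii_uppercase();
    // GTP skips the letter 'I', so columns J and beyond shift down by one
    let x = (c as u16) - ('A' as u16) - if c > 'I' { 1 } else { 0 };
    let y = board_size - row; // reverse the Y axis
    Coord { x, y }
}

// from_gtp('A', 1, 9) == Coord { x: 0, y: 8 }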

1902. PR open to fix all the coords.

Sun Apr 26, 2020

0431. Trying out SDKMAN! for my gradle install. Haven't needed gradle on this old workstation, until today. SDKMAN! is the recommended method for unixy installs of gradle, according to gradle's home page.

It went smoothly...

sdk install gradle 6.3 
Downloading: gradle 6.3

In progress...

############################################################################################# 100.0%

Installing: gradle 6.3
Done installing!


Setting gradle 6.3 as default.

That's great. Debian's default install was a woefully outdated version, which was unusable with the kotlin plugin.

0455. Raised a PR to emit an event when changelog inits a game state.

Experimenting with Fedora Core OS

As part of BUGOUT, we want to attempt supporting Fedora CoreOS. We previously ran our stateful container deployment on CoreOS Container Linux and enjoyed the automatic OS upgrades. It used a relatively small amount of RAM. We were too lazy to move beyond using docker-compose, and that wasn't a problem: after writing a couple of tiny systemd configs, our system always started up reliably.

But CoreOS Container Linux is being put out to pasture, so we've decided to try the new thing.

First Steps with FCCT

We need to create an ignition file which is intended to be written once, and valid for the life of the image.

See some docs for getting started with this configuration tool.

They recommend using podman instead of docker, but there's no snap install available that doesn't warn about potentially stomping on our localdev ❀️ Debian ❀️ system.

sudo snap install --edge podman
error: The publisher of snap "podman" has indicated that they do not consider this revision to be
       of production quality and that it is only meant for development or testing at this point. As
       a consequence this snap will not refresh automatically and may perform arbitrary system
       changes outside of the security sandbox snaps are generally confined to, which may put your
       system at risk.

       If you understand and want to proceed repeat the command including --devmode; if instead you
       want to install the snap forcing it into strict confinement repeat the command including
       --jailmode.

Use docker to produce the ignition file instead. First we created an example YAML file that fcct could read. This was a bit unclear based on the current state of the documentation, as we used .yaml for the file extension instead of .fcc.

# local example.yaml
variant: fcos
version: 1.0.0
passwd:
  users:
    - name: core
      ssh_authorized_keys:
        - ssh-rsa AAAAB3NzaC1yc...
docker pull quay.io/coreos/fcct:release
docker run -i --rm quay.io/coreos/fcct:release --pretty --strict < example.yaml > example.ign

As promised, this command output an ignition file:

{
  "ignition": {
    "config": {
      "replace": {
        "source": null,
        "verification": {}
      }
    },
    "security": {
      "tls": {}
    },
    "timeouts": {},
    "version": "3.0.0"
  },
  "passwd": {
    "users": [
      {
        "name": "core",
        "sshAuthorizedKeys": [
          "ssh-rsa AAAAB3NzaC1yc..."
        ]
      }
    ]
  },
  "storage": {},
  "systemd": {}
}

Launching an Instance on AWS

We can launch an instance on AWS. See the operating system getting started page.

The docs currently ask you to use the aws command-line interface. I'm too lazy to do that. I don't want to (re)learn the options for the ec2 run-instances command. I just want to plug some values in via the web interface.

To do that, click through the downloads page, look for your AWS region, click through to the launch instance page within AWS, then look for the "user data" details.

user data for your ignition config

You can enter your ignition config here.

Finally, you launch the instance and connect:

Fedora CoreOS 31.20200407.3.0
Tracker: https://github.com/coreos/fedora-coreos-tracker
Discuss: https://discussion.fedoraproject.org/c/server/coreos/

Setting up the system

Right now we're manually exploring the system to see how things feel.

git clone https://github.com/Terkwood/BUGOUT.git
cd BUGOUT
sudo usermod -aG docker $USER  # we'll fix this in next section
sudo reboot   # reboot helped

You can use ctop:

docker run -ti -v /var/run/docker.sock:/var/run/docker.sock quay.io/vektorlab/ctop:latest

You cannot use toolbox. It isn't really supposed to work due to the permissions structure of FCOS.

You can use rpm-ostree to install htop:

sudo rpm-ostree install htop
sudo systemctl reboot  # you need to reboot to get access to it

Exploring Packer

We are clearly going to need to manage the creation of this image. Let's try using Packer.

Here is their basic guide to build an image.

We can move on to running something a bit more interesting.

You first need to write a program.

cat >hello.ts
console.log("Welocem Friend πŸ¦•");

You need some env vars specified.

# set_deno_env.sh
export VPC_ID="vpc-deadbeef"
export SUBNET_ID="subnet-bad1dea5"

...then...

source set_deno_env.sh

Write some packer-example.json:

{
    "variables": {
        "aws_access_key": "{{env `AWS_ACCESS_KEY_ID`}}",
        "aws_secret_key": "{{env `AWS_SECRET_ACCESS_KEY`}}",
        "region":         "us-east-1",
        "vpc_id":         "{{env `VPC_ID`}}",
        "subnet_id":      "{{env `SUBNET_ID`}}"
    },
    "builders": [
        {
            "access_key": "{{user `aws_access_key`}}",
            "ami_name": "packer-linux-aws-demo-{{timestamp}}",
            "instance_type": "t3.micro",
            "region": "{{user `region`}}",
            "vpc_id": "{{user `vpc_id`}}",
	          "subnet_id": "{{user `subnet_id`}}",
            "secret_key": "{{user `aws_secret_key`}}",
            "source_ami_filter": {
              "filters": {
              "virtualization-type": "hvm",
              "name": "ubuntu/images/*ubuntu-xenial-16.04-amd64-server-*",
              "root-device-type": "ebs"
              },
              "owners": ["099720109477"],
              "most_recent": true
            },
            "ssh_username": "ubuntu",
            "type": "amazon-ebs"
        }
    ],
    "provisioners": [
        {
            "type": "file",
            "source": "./welcome.txt",
            "destination": "/home/ubuntu/"
        },
        {
            "type": "file",
            "source": "./hello.ts",
            "destination": "/home/ubuntu/"
        },
        {
            "type": "shell",
            "inline":[
                "ls -al /home/ubuntu",
                "cat /home/ubuntu/welcome.txt"
            ]
        },
        {
            "type": "shell",
            "inline": [
              "sudo apt install -y unzip",
              "curl -fsSL https://deno.land/x/install/install.sh | sh"
              
            ]
        },
        {
            "type": "shell",
            "inline":[
              "export DENO_INSTALL=\"/home/ubuntu/.deno\"",
              "export PATH=\"$DENO_INSTALL/bin:$PATH\"",
              "deno hello.ts"
            ]
        }
    ]
}

You'll see a bunch of glorious progress, and finally, an artifact:

==> Builds finished. The artifacts of successful builds are:
--> amazon-ebs: AMIs were created:
us-east-1: ami-eeeeeeeeeeeeeeeea

Don't forget to deregister the AMI after you're done!

Putting It All Together With FCOS

That's all well and good, and we're happy about packer.

But we need a usable FCOS image with a few modifications that packer, rather than ignition, is well-suited to handle.

Create an ignition file which you'll include in the packer config:

variant: fcos
version: 1.0.0
passwd:
  users:
    - name: core
      groups: [ docker ]
      ssh_authorized_keys:
        - ssh-rsa AAAA...
storage:
  files:
    - path: /opt/bin/docker-compose
      overwrite: true
      mode: 0755
      contents:
        source: https://github.com/docker/compose/releases/download/1.13.0/docker-compose-Linux-x86_64
        verification:
          hash: sha512-9d2c4317784999064ba1b71dbcb6830dba38174b63b1b0fa922a94a7ef3479f675f0569b49e0c3361011e222df41be4f1168590f7ea871bcb0f2109f5848b897
docker run -i --rm quay.io/coreos/fcct:release --pretty --strict < packed.yaml > packed.ign

Then write your packer config:

{
    "variables": {
        "aws_access_key": "{{env `AWS_ACCESS_KEY_ID`}}",
        "aws_secret_key": "{{env `AWS_SECRET_ACCESS_KEY`}}",
        "region":         "us-east-1",
        "vpc_id":         "{{env `VPC_ID`}}",
        "subnet_id":      "{{env `SUBNET_ID`}}"
    },
    "builders": [
        {
            "access_key": "{{user `aws_access_key`}}",
            "ami_name": "fcos-linux-aws-demo-{{timestamp}}",
            "instance_type": "t3.medium",
            "region": "{{user `region`}}",
            "vpc_id": "{{user `vpc_id`}}",
            "subnet_id": "{{user `subnet_id`}}",
            "secret_key": "{{user `aws_secret_key`}}",
            "user_data_file": "packed.ign",
            "source_ami_filter": {
              "filters": {
                "virtualization-type": "hvm",
                "name": "fedora-coreos-31.20200407.3.0",
                "root-device-type": "ebs"
              },
              "owners": ["125523088429"],
              "most_recent": true
            },
            "ssh_username": "core",
            "type": "amazon-ebs"
        }
    ],
    "provisioners": [
        {
            "type": "shell",
            "inline": [
                "sudo rpm-ostree install htop"
            ]
        },
        {
            "type": "shell",
            "inline":[
                "git clone https://github.com/Terkwood/BUGOUT.git"
            ]
        }
    ]
}

...and we have a baked image!

We used docker-compose 1.13 instead of 1.25 because python3.7 fails to link to libcrypt.so.1 on Fedora 31. ☹️

Launch Templates Are Helpful

Creating a launch template makes the AWS CLI invocation less annoying.

You can launch the instance using minimal parameters:

aws ec2 run-instances --launch-template LaunchTemplateId=lt-0000,Version=2 --image-id ami-0000 --subnet-id subnet-deadbeef

In the example above, we pasted the ignition user data into the web form. But you can override user data at the command line, if desired.

Completing the ignition

Next up, write some systemd data and figure out how to seat some dev.env files as .env files.

Using journalctl for monitoring

You can follow logs with journalctl:

journalctl -u bugout -f

DigitalOcean has a nice tutorial.

Friendly Memory Footprint

We were pleasantly surprised by the amount of memory saved by switching from Container Linux to Fedora CoreOS. We haven't done any extensive testing yet, but the /proc/meminfo readings shown below are from the same workload (kafka box) under the same conditions (1-3 users playing, system online for less than 15 minutes).

Our old t3.medium Container Linux VM:

MemTotal:        3979308 kB
MemFree:         1793524 kB
MemAvailable:    1920252 kB
Buffers:           67600 kB
Cached:           449764 kB
SwapCached:            0 kB
Active:          1760416 kB
Inactive:         280408 kB

The new t3.medium Fedora CoreOS VM:

MemTotal:        3962968 kB
MemFree:         2971716 kB
MemAvailable:    3439920 kB
Buffers:            1108 kB
Cached:           626192 kB
SwapCached:            0 kB
Active:           414104 kB
Inactive:         405596 kB

Juneteenth + 1, 2020

Greetings. We are attempting a dev-gateway build with a t3.xlarge.

Started at 1014 local time.

The dev build on a t3.medium takes a LONG time: 40 minutes plus! This is because we build a number of things:

  • gateway
  • reaper
  • micro-judge
  • micro-changelog
  • micro-game-lobby
  • bugle

And we rely on CADDY and REDIS images, also!

Observations

Just that the build is faster. This is not a measured or accurate statement, but on the other hand we've done packer builds over and over using a t3.medium, because we're stingy! And the experience with the XL is moving along at a better pace.

Goal πŸ₯…

WE WANT TO BUILD tinker/streams BRANCH ON THE LAUNCHED GATEWAY INSTANCE!

We want to move all Rust stream-based services to implementations that do not require tracking entry IDs on our own. We will use XREADGROUP and let Redis track the IDs.

We will need to write code which performs creation of consumer groups.
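
Something like this minimal sketch, using the low-level string-command interface from redis-rs; the stream and group names are assumptions. XGROUP CREATE fails with a BUSYGROUP error when the group already exists, which is safe to ignore.

use redis::{Connection, RedisResult};

// Idempotently create a consumer group, starting at the end of the stream.
fn create_group(con: &mut Connection, stream: &str, group: &str) -> RedisResult<()> {
    let created: RedisResult<()> = redis::cmd("XGROUP")
        .arg("CREATE").arg(stream).arg(group)
        .arg("$")        // deliver only entries added after creation
        .arg("MKSTREAM") // create the stream if it doesn't exist yet
        .query(con);
    match created {
        Err(e) if !e.to_string().contains("BUSYGROUP") => Err(e),
        _ => Ok(()), // newly created, or already existed: both fine
    }
}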

Trying Out Yugabyte

Let's try installing a three-node Yugabyte cluster on a 2011-era laptop running Debian linux!

The quickstart guide is here.

We ran the suggested:

./bin/yb-ctl --rf 3 create

But it initially failed, because we had a Redis server running on the default port. So let's turn that off...

sudo systemctl stop redis

We need to destroy the partially-created system before retrying:

./bin/yb-ctl --rf 3 destroy

Then we retried the creation of three nodes, but our disks didn't spin fast enough to satisfy the default timeout values! So we found another failure.

Creating cluster.
Waiting for cluster to be ready.
Traceback (most recent call last):
  File "./bin/yb-ctl", line 2021, in <module>
    control.run()
  File "./bin/yb-ctl", line 1998, in run
    self.args.func()
  File "./bin/yb-ctl", line 1755, in create_cmd_impl
    self.wait_for_cluster_or_raise()
  File "./bin/yb-ctl", line 1598, in wait_for_cluster_or_raise
    raise RuntimeError("Timed out waiting for a YugaByte DB cluster!")
RuntimeError: Timed out waiting for a YugaByte DB cluster!
...snip...
Error: Leader not ready to serve requests. (yb/master/scoped_leader_shared_lock.cc:93):
 Unable to list tablet servers: Leader not yet ready to serve requests: leader_ready_term_ = -1; cstate.current_term = 1

Let's increase all the possible timeout values and hope that the install works:


# values in seconds
./bin/yb-ctl --timeout-yb-admin-sec 3000 --timeout-processes-running-sec 3000 --rf 3 create

This time it worked!

Creating cluster.
Waiting for cluster to be ready.
----------------------------------------------------------------------------------
| Node Count: 3 | Replication Factor: 3                                           |
----------------------------------------------------------------------------------
| JDBC                : jdbc:postgresql://127.0.0.1:5433/postgres                 |
| YSQL Shell          : bin/ysqlsh                                                |
| YCQL Shell          : bin/ycqlsh                                                |
| YEDIS Shell         : bin/redis-cli                                             |
| Web UI              : http://127.0.0.1:7000/                                   |
| Cluster Data        : /home/whoever/yugabyte-data                              |
----------------------------------------------------------------------------------

Can we connect to the redis-like interface (YEDIS)? Yes.

./bin/redis-cli
127.0.0.1:6379> ping
PONG
127.0.0.1:6379> ping
PONG
127.0.0.1:6379>

Can we connect to Yugabyte as a SQL database using the provided shell script? Yes.

bin/ysqlsh

Let's create a user and then try to connect with one of our favorite SQL GUIs, SQLWorkbench/J.

yugabyte=# create user whoever with password 'superhardtoguess';
CREATE ROLE

Now we can connect using SQLWorkbench/J. We need to check the box for AUTOCOMMIT so that it's enabled. Notice that the port differs from the default of 5432. Also, helpfully, the team at Yugabyte made sure that there's a tutorial for connecting with SQLWorkbench/J. Thank you!

Connect with SQLWorkbench/J

So we're connected, but can we create a table? Yes. We waited 20 seconds for it to happen.

Creating a basic table

Thanks for Reading

This article was drafted on July 14, 2020.

DenoLand

Articles on Deno.

ALL ABOARD

All Aboard the Deno Hype Train: Trimming AWS Tedium

This article explores the use of Deno to automate repetitive AWS clean-up tasks. We conclude by installing a cron job that wakes up our laptop and disposes of our AWS testing resources every night.

Please be advised that this article is for educational purposes only.

If you blindly follow all the instructions, you'll end up deleting all your AMIs and snapshots. ⚠️

Detail

As part of our daily development cycle, we create a couple of virtual machine images in AWS, then forget about them for a while. At some point, we boot them up and use them, and then throw them away.

Once the instances are running, we have no more need for the AMIs from which they were created, and, crucially, we don't want to continue spending on the snapshot volumes to which the AMIs are registered.

πŸ’Έ Snapshots cost money! πŸ’Έ

So, for the last few days we've followed a manual workflow to clean up our newly created machine images and snapshots, and that's a big waste of time.

Instead of using the AWS command-line interface to manage the clean-up of these snapshots and machine images, we decided to write a small script using Deno! After all, the documentation claims that it's a good fit for use cases where you would otherwise employ a small bash or python script.

Follow along and decide for yourself whether it's an improvement.

Tedious, Motivating Example

So, this is the painfully slow workflow that we used to clean things up by hand. Sure, it only takes a minute, or maybe two (if we're feeling slow). But keep in mind that we've been doing this every day for the last week... 🀒

First, we find the images we control:

aws ec2 describe-images --owners self|grep ami

▢️ ▢️ ▢️

"ImageId": "ami-0aaaaaaaaaaaaaaaa",
"ImageId": "ami-0bbbbbbbbbbbbbbbb",

Deregister the images (their snapshots can't be deleted while an AMI still references them):

aws ec2 deregister-image --image-id ami-0aaaaaaaaaaaaaaaa
aws ec2 deregister-image --image-id ami-0bbbbbbbbbbbbbbbb

Now that the images are deregistered, find all of our snapshots:

aws ec2 describe-snapshots --owner self | grep snap-

▢️ ▢️ ▢️

"SnapshotId": "snap-0cccccccccccccccc",
"SnapshotId": "snap-0dddddddddddddddd",

Finally, destroy these snapshots

aws ec2 delete-snapshot --snapshot-id snap-0cccccccccccccccc
aws ec2 delete-snapshot --snapshot-id snap-0dddddddddddddddd

What a colossal waste of time!

Pleasantly Hyped Automation

OK, here's the fun part. Let's rewrite this junk using the shiniest, newest tech! πŸ¦•

Our workflow requires that we look up both Amazon Machine Image (AMI) IDs, as well as Snapshot IDs. We deregister the AMIs one by one, and we delete the snapshots one by one, as well.

Executing a command via subprocess is easily accomplished using Deno.run. Here we use stdout: "piped" because we'll want to capture the output from the command and manipulate it:

const p = Deno.run({ cmd: ["/usr/bin/aws", "ec2", "describe-images", "--owners", "self"], stdout: "piped" });

const { code } = await p.status();

if (code !== 0) {
    const rawError = await p.stderrOutput();
    const errorString = new TextDecoder().decode(rawError);
    console.log(errorString);
    Deno.exit(code);
}

If we were to run this command directly in a terminal, we'd see something like this:

{
    "Images": [
        {
            "Architecture": "x86_64",
            "CreationDate": "1970-01-01T08:25:24.000Z",
            "ImageId": "ami-0aaaaaaaaaaaaaaaa",
            // ... SNIP ...
        },
        {
            "Architecture": "x86_64",
            "CreationDate": "1970-01-01T08:15:13.000Z",
            "ImageId": "ami-0bbbbbbbbbbbbbbbb",
            // ... SNIP ...
        }
    ]
}

Using grep to pull out the ImageId field will leave us with an ugly line of text which we then need to further trim down to the actual ID. And we're too lazy to write a proper regex. 🌝

Thankfully, TypeScript makes this dead simple for us, since it handles JSON very naturally:

const { Images } = JSON.parse(new TextDecoder().decode(await p.output()));

for (let { ImageId } of Images) {
  await Deno.run({
    cmd: ["aws", "ec2", "deregister-image", "--image-id", ImageId],
  }).status(); // wait for each deregistration to finish
}

Fast-forwarding a bit, we can streamline our code by pulling out the JSON.parse call used with every AWS CLI invocation, and declaring it as a function:

const parseProcessOutput = async (p: Deno.Process) =>
  JSON.parse(new TextDecoder().decode(await p.output()));

We also wanted to stop "typing", "all", "our", "arguments", "to", "the", "Deno.run", "cmd" as comma-separated strings, because we're lazy πŸ‘Ό:

const awsEc2Cmd = (argStr: string) => [
  "/usr/bin/aws",
  "ec2",
  ...argStr.split(" "),
];

With these little helpers declared (plus a runOrExit wrapper around the run-and-check-status pattern from the first snippet), cleaning up our expen$ive image snapshots is now easy:

const dsp = await runOrExit(
  {
    cmd: awsEc2Cmd("describe-snapshots --owner self"),
    stdout: "piped",
  },
);

const { Snapshots } = await parseProcessOutput(dsp);

for (let { SnapshotId } of Snapshots) {
  await runOrExit(
    {
      cmd: awsEc2Cmd(`delete-snapshot --snapshot-id ${SnapshotId}`),
      stdout: undefined,
    },
  );
}

That's fully HALF of our daily dev trash that we generate. We also tend to run a couple of instances in our AWS dev environment. We want to terminate both of those instances. One of the two will usually have an elastic IP associated with it, so we make sure to release that IP and avoid charges.

Here's the complete script, which depends on our above helpers being defined in a procs.ts file:

import { runOrExit, parseProcessOutput, awsEc2Cmd } from "./procs.ts";
import { config as loadEnv } from "https://deno.land/x/dotenv@v0.3.0/mod.ts";

console.log(loadEnv({ safe: true, export: true }));

// This is the instance tag "Name", used to identify
// our dev environment instances.
// It's loaded from a .env file which looks like this:
//
// KEY_NAME=my-fancy-dev-instance-tag
const KEY_NAME = Deno.env.get("KEY_NAME");

let instsDescd = runOrExit(
  { cmd: awsEc2Cmd("describe-instances"), stdout: "piped" },
);

let addrsDescd = runOrExit(
  { cmd: awsEc2Cmd("describe-addresses"), stdout: "piped" },
);

const { Reservations } = await parseProcessOutput(await instsDescd);

let instancesToTerminate = [];
for (let { Instances } of Reservations) {
  for (let { InstanceId, KeyName } of Instances) {
    if (KEY_NAME === KeyName) {
      instancesToTerminate.push(InstanceId);
    }
  }
}

const { Addresses } = await parseProcessOutput(await addrsDescd);

let addressesToRelease = [];
for (let { InstanceId, AllocationId, AssociationId } of Addresses) {
  if (instancesToTerminate.includes(InstanceId)) {
    addressesToRelease.push({ AllocationId, AssociationId });
  }
}

if (addressesToRelease.length > 0) {
  console.log(`Addresses to release  : ${JSON.stringify(addressesToRelease)}`);

  for (let { AssociationId, AllocationId } of addressesToRelease) {
    await runOrExit({
      cmd: awsEc2Cmd(`disassociate-address --association-id ${AssociationId}`),
    });

    await runOrExit({
      cmd: awsEc2Cmd(`release-address --allocation-id ${AllocationId}`),
    });
  }
}

if (instancesToTerminate.length > 0) {
  console.log(
    `Instances to terminate: ${JSON.stringify(instancesToTerminate)}`
  );

  await runOrExit({
    cmd: awsEc2Cmd(
      `terminate-instances --instance-ids ${instancesToTerminate.join(" ")}`
    ),
  });
}

Deno.exit(0);

When I run this script, my credit card sighs in glorious relief. πŸ€“

The Triggered Clean-Up Nirvana

Yes, we prefer to watch Stargate SG-1 and Fringe at 9pm. We do not remember to clean up our precious AWS resources.

We need a cron job!

It should wake up this x86_64 laptop from sleep, and use our local AWS credentials to trigger the cleanup scripts.

Why not just run this on a local raspberry pi, or something? We have at least one of those running 24/7, so we wouldn't need to mess with forcing a wakeup from sleep.

Well, it turns out that Deno can't run on ARM platforms yet. Oh well!

As long as we can wake up our power-hungry laptop reliably, we don't mind a little bit of a hacky solution, here. It beats waking up our human body to take care of such a tedious task!

Try Out RtcWake

We'll attempt to use RtcWake to jolt our laptop into consciousness.

🎡 πŸ”ˆ First, put the music on nice and loud... 🎹 πŸ‘Ύ

# install on debian & ubuntu
sudo apt install -y util-linux

# put the laptop to sleep for 10 seconds, then resume
sudo rtcwake -u -s 10 -m mem

Blackout... πŸ™ˆ

...and in 10 seconds, we heard our tunes pop right back into place! πŸ‘‚ 🏁

Scheduled Wake-Up

We're gonna need to do this by cron, so we need to be able to schedule a wake-up in advance:

sudo rtcwake -m no -l -t $(date +%s -d 'today 19:15')

⌚️ Try this at 19:14, then wait...

🌞 No problems!

Add some crontab love πŸ’Ÿ

Well, we left a disk sitting around in AWS for a couple of weeks and it cost us a bit of money. We dithered and didn't bother to release our article. NOW WE HAVE GREAT RESOLVE!

We shall run a crontab.

We shall always clean up our expensive dev trash!

We shall google for how to do this and then tell you about it.

Enter the local user's crontab, editing the file for correctness before committing it:

sudo crontab -u FRIENDLY_USER -e

This was our test run. We tried a deno script which invoked wall, to make sure we could run our more complex deno logic for the dev environment & AMI/snapshot cleanups.

test drive

Final Plan πŸ“†

  • crontab: (as root!) periodically schedule RTC wake at 23:27
  • crontab (as user, three minutes later): destroy all snapshots and AMIs using deno script
  • crontab (as user, also three minutes later): destroy any dev environment instances

That's it. Our laptop is configured to go back to sleep relatively quickly, so we shouldn't burn too much disgusting coal power (don't hate, we live in Indiana 🀒 🏭) after it kicks on.

The RTC Wake-Up Spammer used by ROOT Crontab

rtc spammer as root

The cleanup crontab for Normal User

normal user cleanup

We need to make sure the KEY_NAME var is exposed in the environment; our first attempt, above, failed without it.

Checking for Correctness

We should make sure that the instances are destroyed, their disks destroyed, elastic IP released completely, snapshots destroyed, AMIs destroyed.

Although our overall approach is a hack, we're willing to accept the disorganization... as long as everything actually works!

References and Attributions

train image by fsse8info is licensed under CC BY-SA 2.0.

The initial nugget of subprocess management originates in the Deno manual subprocess example.

upgrading deno file server

deno install -f --allow-net --allow-read https://deno.land/std/http/file_server.ts

Using Deno in VsCode

Have you recently switched from the deprecated VsCode Deno extension to the new one?

Are you seeing red squigglies? πŸ”΄

You can fix it by reading this solution.

Read the one above it too.

Penguin Hell

These articles describe tidbits of linux and command line practices that I've recently developed.

If you just want to be happy, you should ignore all my technobabble and head straight for the Hannah Montana Linux website.

Privileged mode: Running ctop in docker under SELinux

Some programs like ctop are nice to run using docker containers, so that you don't have to manually download a binary and copy it into /usr/local/bin, where it will sit in a sad little corner, unmanaged by apt or yum.

But if you run an SELinux-enabled distribution, you'll find that running ctop as the documentation suggests fails:

docker run --rm -ti \
  --name=ctop \
  --volume /var/run/docker.sock:/var/run/docker.sock:ro \
  quay.io/vektorlab/ctop:latest

🐳 🐳 🐳

ctop - error ───────
  β”‚                                                                                 β”‚
  β”‚  [12:54:15 UTC] attempting to reconnect...                                      β”‚
  β”‚                                                                                 β”‚
  β”‚  [12:54:16 UTC] Get http://unix.sock/info: dial unix /var/run/docker.sock:      β”‚
  β”‚  connect: permission denied                                                     β”‚

What's going on here? Presumably SELinux is blocking the ctop container's access to information necessary for monitoring the other containers.

The Fix

Luckily, there's a very easy fix for this! You can just run the ctop container in privileged mode:

docker run --privileged --rm -ti \
  --name=ctop  \
  --volume /var/run/docker.sock:/var/run/docker.sock:ro \
  quay.io/vektorlab/ctop:latest

Now you can see all your favorite containers, interact with their log interfaces, dip into their shells, etc:

  ctop - 12:57:42 UTC   8 containers

     NAME        CID         CPU         MEM         NET RX/TX   IO R/W      PIDS

   β—‰  bugout_bot… 8f84fe3983…      1%       2M / 944M 19M / 16M   1M / 0B     6
   β—‰  bugout_bug… 16f1479be4…      0%       3M / 944M 1M / 1M     6M / 0B     5
   β—‰  bugout_gat… fc951914df…      0%       3M / 944M 56M / 35M   256K / 0B   21

Use Keyboard Shortcut to Focus alacritty in Gnome

alacritty is a πŸ”₯ hot, fast terminal emulator πŸ”₯ written in rust. We've been using it for the last 38 microseconds and really enjoy it.

We want to hit a single key and focus on alacritty while using GNOME. If alacritty isn't already open, it should start up. This would give us functionality somewhat equivalent to Guake.

As a bonus, we'll use deno to write a quick helper script and πŸ˜‡ avoid learning bash. πŸ˜‡

Here's what we did to accomplish this task.

Install wmctrl

Install wmctrl so that we can focus on an existing alacritty window using a program.

For debian & ubuntu-flavored linux:

sudo apt install wmctrl

For the redhats:

sudo yum install wmctrl

Start alacritty up for just a moment and take note of its handle as given by wmctrl -xl:

...snip...
0x04800002  1 Alacritty.Alacritty   mybox Alacritty
...snip...

We can then use wmctrl to focus on a running instance of alacritty:

wmctrl -xa Alacritty.Alacritty

Create script to raise or start-up terminal

Create a deno script to either focus on the existing alacritty window, or start a new one:

// saved to /home/nope/bin/raise_alacritty.ts
const p = Deno.run({
    cmd: [ "/usr/bin/pgrep", "alacritty" ]
});

const { code } = await p.status();

if (code === 0) {
    await Deno.run({ cmd: [ "wmctrl", "-xa", "Alacritty.Alacritty" ] });
} else {
    await Deno.run({ cmd: [ "alacritty" ] });
}

Bind keyboard shortcut

Finally, in gnome settings, add a keyboard shortcut which runs our script.

keybinding

In our case, we assigned the special MENU button on our keyboard to focus on alacritty. We use the default GNOME shortcut (Super H/WindowsKey H) to hide the window when we're done with it.

Updated for deno 1.0.0-rc3

Please note that as of deno 1.0.0-rc3, the command line invocation needed to make this work has changed. You now need to use the run subcommand:

deno run --allow-run /path/to/raise_alacritty.ts

This article was originally published under an older version, which did not require the subcommand:

deno  --allow-run /path/to/raise_alacritty.ts

If you're having trouble getting this to work under the new version of Deno, try adding run!

be easy

Easy scrolling with tmux and alacritty

We want to have a nice experience when scrolling the terminal buffer using both alacritty and tmux on Debian linux.

When we run alacritty and tmux together and there's a lot of text on the screen (say, by invoking tree), we need to hit the following tmux key sequence to scroll back through the terminal history:

Ctrl+B 
[
PageUp

This is difficult. Others have asked about this.

We followed the workaround advice given by the author. We disabled faux scrolling in .tmux.conf:

set -ga terminal-overrides ',*256color*:smcup@:rmcup@'

In .alacritty.yml, the default scrolling key combination, Shift+PageUp, isn't too bad. We can just leave it commented out to use it, but if we want to try our own settings, we can:

key_bindings:
#
# ...snip...
#
- { key: PageUp,   mods: Shift, action: ScrollPageUp,   mode: ~Alt       }
- { key: PageDown, mods: Shift, action: ScrollPageDown, mode: ~Alt       }
- { key: Home,     mods: Shift, action: ScrollToTop,    mode: ~Alt       }
- { key: End,      mods: Shift, action: ScrollToBottom, mode: ~Alt       }

Resolution

With the changes to .tmux.conf, we can now start an alacritty terminal, enter tmux, and scroll to our hearts' content, without using a difficult key combination.

πŸ”₯ It's FAST! πŸ”₯

We found that scrolling through history with some basic command line apps wasn't negatively impacted.

When we entered tmux + alacritty, and then dumped a lot of text into less, or navigated man bash, we scrolled happily and without interruption.

These instructions probably apply to other terminal emulators, as well.

This is a happy time. πŸ˜„

Cross-post

This article was cross-posted on dev.to.

Systemd and Systemctl

systemd is really useful. It's a great way to ensure services start up on linux servers reliably.

Fedora CoreOS features systemd units in its machine-provisioning specification.

Resources

Copying to the clipboard from the Linux command line

Just use xsel.

Debian install:

sudo apt install xsel

Demo:

cat /tmp/takeout-order | xsel --clipboard

AWS CLI crib notes

AWS CLI isn't very exciting, but it's helpful.

Starting and Stopping Instances with Wait

You can start and stop instances. Take note of the wait instance-stopped command, which makes this little trick work.

aws ec2 stop-instances --instance-ids $INSTANCE_ID 
aws ec2 wait instance-stopped --instance-ids $INSTANCE_ID
aws ec2 modify-instance-attribute --instance-id $INSTANCE_ID --instance-type $INSTANCE_TYPE 
aws ec2 start-instances --instance-ids $INSTANCE_ID

Attach an Elastic IP to an AWS Instance using CLI

aws ec2 allocate-address --domain vpc  # returns an eipalloc
aws ec2 associate-address --instance-id i-aaaaaaaaaaaa --allocation-id eipalloc-000000000000

Destroy Your AMIs and Their Snapshots!

We frequently create AMIs with packer. This script can help clean up the images and their associated snapshots.

aws ec2 describe-images --owners self|grep ami
aws ec2 deregister-image --image-id ami-0000aaaa
aws ec2 describe-snapshots --owner self | grep snap-
aws ec2 delete-snapshot --snapshot-id snap-aaaa0000

Vim

I am just now learning vim in some earnest. I was always more of an emacs guy, back in the day...

Macros in Vim

Seriously, they're SO EASY.

Record a macro:

q<letter><commands>q

Execute a macro <number> times (once by default):

<number>@<letter>

Copying Text into Other Linux Apps

It's not fun to copy text out of vim and into other linux applications, but here's how you do it:

  1. Select the text in vim using your mouse!
  2. The text is now in your "middle click register"
  3. You can now use the mouse middle click to paste it into other apps

Get Into Teh Streamss

Adding Redis Streams to Rust πŸ’Ύ πŸ¦€

I recently worked on a pull request which adds Redis Streams capabilities to redis-rs, the most popular Redis client in Rust's ecosystem. The overwhelming majority of the effort was contributed by the community, not by me: I drafted the pull request which combines the two existing works. In addition to addressing review comments, I added a few examples of how the new API works.

I'm feeling the hype! πŸ”₯ What is Redis Streams? Why do we need it in Rust? Does it have anything to do with Kafka Streams? And can we share any real-life examples?

Please read on...

What is Redis Streams? πŸ€”

To give a little bit of background, Redis Streams was released as part of Redis 5.0, all the way back in Oct 2018.

antirez posts a great explanation of the new features.

You can think of Redis Streams as a way to communicate time-indexed data among processes. Oversimplifying, you can imagine that each stream is a CSV with a timestamp. Digging deeper, Redis Streams exposes functionality that's more advanced than basic pub/sub, as you can have multiple consumers process a single stream: even if one consumer happens to be faster than the others, Redis will rationally distribute the data.

Unlike Redis's pub/sub mechanism, Redis Streams are somewhat durable. If you miss a message, it's still available in the stream, within limits; a stream will generally have a cap on the number of messages allowed to accumulate.
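
Here's a tiny Rust sketch using redis-rs's low-level command interface (the stream and field names are invented): we append one time-indexed record, then read the stream back from the beginning.

use redis::{Connection, RedisResult, Value};

// Append one record, then read the whole stream back.
fn xadd_then_xread(con: &mut Connection) -> RedisResult<Value> {
    // "*" asks Redis to assign the timestamp-based entry ID
    let id: String = redis::cmd("XADD")
        .arg("sensor-stream").arg("*")
        .arg("temp_f").arg(72)
        .query(con)?;
    println!("appended entry {}", id);
    // "0" means: everything from the beginning of the stream
    redis::cmd("XREAD").arg("STREAMS").arg("sensor-stream").arg("0").query(con)
}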

Why Does Rust Need Redis Streams? πŸ”Ž

The Streams commands are over a year old, so there was some support expressed for including them in redis-rs.

It's great that the community came together and created a separate lib exposing the Redis Streams API. But Redis Streams is a first-class citizen of the larger Redis API, so it makes sense to include it as part of the leading Redis crate.

We've exposed the new Redis Streams commands as an opt-out feature in redis-rs. If you want to reduce your compile time 🐌 πŸ¦€, you can explicitly disable streams support. This is the same as how geospatial operators work in redis-rs, so it should be a familiar concept for developers who have experience with the lib and who want to try out Streams.

Off-the-Cuff Comparison with Kafka Streams 🌽

After looking into Redis Streams on my own, I did some initial comparisons with Kafka Streams and came to some conclusions:

  • Redis Streams is a subset of the Redis server API. The stream commands are exposed by the basic Redis server, not through a separate lib. On the other hand, Kafka Streams is a combination of the server (Kafka) and a JVM-based framework/lib (Kafka Streams).
  • Kafka Streams apps coordinate data partitioning on the client side, while in Redis Streams, the server decides which consumer group gets which slice of data. This helps Kafka achieve extraordinary scale: clients can work together to distribute load without the server becoming a bottleneck.
  • Naive Kafka Streams apps consume a LOT of RAM, so if you're stingy about that sort of thing (and don't plan on scaling to infinity), Redis Streams can be a great fit.
  • The data types exposed by Redis Streams are somewhat deeply nested, and can take a little while to get used to.

Despite the similar naming conventions, the use cases for Kafka Streams and Redis Streams don't overlap as much as you might think. Redis works great when you have one or two boxes (potentially with an enormous amount of RAM). Kafka is built for massive, horizontal scale.

Practical Comparison πŸ”§

Out of sheer, nerdtastic enthusiasm, I had already written several Kafka Streams applications to process game states and various player-coordination functions for BUGOUT, our implementation of the ancient board game Go (Baduk). I went ahead and rewrote some of this functionality using rust and Redis Streams. You can see some direct comparisons of how one might structure a Redis Streams app versus a Kafka Streams app.

The declarative style of Kafka Streams apps really stood out as an advantage: we could just focus on the logic required by our game system, and didn't have to write quite as much boilerplate.

But our Redis Streams apps written in rust were minuscule in terms of their memory consumption: in a cold system, just after startup, the micro-judge and micro-game-lobby apps take up about 1MB of RAM, while the Kafka Streams apps ⚠️ usually initialize at 100MB+ ⚠️. Meanwhile, the Redis server itself continues to cruise along with a similarly small main-memory footprint (~3MB in our low-traffic system).

Note that these Redis examples don't actually use the nicer API that's discussed as the main focus of this article -- generally we're using the lowest-level interface which specifies Redis commands using strings. We'll work on upgrading these files once our pull request is merged!

Look Ma, I'm Learning! 🧠

As a result of working on the merge, I improved my understanding of the concepts underlying Redis Streams.

Creating an example of XREADGROUP command patterns gave me an immediate insight into a shortcoming of my board game project: I was maintaining unnecessary boilerplate code to track the entry IDs already processed in a given stream. This code can be destroyed once I switch from naive XREAD to Redis-controlled XREADGROUP.

Using the ">" operator in an XREADGROUP command tells Redis, "hey, give me only the newest records... and YOU keep track of where I am in the stream!" This functionality, combined with the automatic XACK provided in the new additions to redis-rs, makes for a nice combination.
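
A hedged sketch of that pattern, again using the API shape from the pull request (names and signatures as proposed, not final):

use redis::streams::{StreamReadOptions, StreamReadReply};
use redis::{Commands, RedisResult};

fn main() -> RedisResult<()> {
    let client = redis::Client::open("redis://127.0.0.1/")?;
    let mut con = client.get_connection()?;

    // Create the consumer group (and the stream, if absent), starting at the tail.
    let _: () = con.xgroup_create_mkstream("moves", "judge-group", "$")?;

    // XREADGROUP GROUP judge-group judge-1 COUNT 10 STREAMS moves >
    // The ">" ID asks only for entries never delivered to this group:
    // the server, not our app, tracks our position in the stream.
    let opts = StreamReadOptions::default()
        .group("judge-group", "judge-1")
        .count(10);
    let reply: StreamReadReply = con.xread_options(&["moves"], &[">"], opts)?;

    for stream in reply.keys {
        for entry in stream.ids {
            println!("processing {}: {:?}", entry.id, entry.map);
            // Acknowledge, so the entry leaves this consumer's pending list.
            let _: i32 = con.xack("moves", "judge-group", &[&entry.id])?;
        }
    }
    Ok(())
}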

Conclusion πŸ’›

If you're excited about seeing the Redis Streams support finally make it into the Rust ecosystem in a nice way, please dust the PR with emojis, or better yet, with critical review. πŸ”¬

If you're using Redis Streams in your own work, we'd love to hear from you.

Attribution

Thank you to Audrey for the image used in the header. It is licensed under CC BY 2.0. I cropped the image in order to make it fit a bit better as a header.

Around the Web

You can also find this article on LinkedIn and dev.to.

The MIDI Light Show

It all started with an innocent attempt to learn anything about electronics. We liked rust πŸ¦€, and we liked a Raspberry Pi 3 B+ πŸ€–, and we liked Bach 🎡.

Who doesn't, right?

Arrangements

We went thoroughly overboard and created video for several of these productions.

JS Bach, Fantasia in G Major

MIDI arrangement by Jamie Holdham, licensed under CC 1.0 Universal Public Domain Dedication

Johannes Roussel created the Electric Piano Soundfont, which is used to render MIDI into audio waveforms.

GF Handel, Organ Concerto in G Major Op 4 No 1

The audio was sourced through the generous efforts of fellow free-license contributors:

  • Charles Adams transcribed the MIDI via MuseScore. It is licensed under CC 4.0. (The score itself: https://musescore.com/user/18579/scor... ; his MuseScore page: https://musescore.com/user/18579 ; the CC 4.0 license: https://creativecommons.org/licenses/...) The MIDI itself was modified only slightly, to combine all notes into a single clef.

  • Johannes Roussel created the high-quality Church Organ Soundfont.

Erik Satie, Ogive #2

This MIDI light show is powered by a Raspberry Pi with 24 LEDs. Since each musical octave has 12 half-steps, it allows us to show two full octaves of sound before "wrapping".
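
The mapping itself is tiny. A sketch of the idea (our real source is on Github; the names here are illustrative):

/// Map a MIDI note number (0-127) onto one of 24 LEDs.
/// 12 half-steps per octave means the pattern wraps every two octaves.
fn led_for_note(midi_note: u8) -> usize {
    const LED_COUNT: usize = 24;
    midi_note as usize % LED_COUNT
}

fn main() {
    // Middle C (60) and the C two octaves up (84) light the same LED.
    assert_eq!(led_for_note(60), led_for_note(84));
    println!("middle C lights LED {}", led_for_note(60));
}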

In this selection, we play the second of four "Ogives" written by the French composer Erik Satie at the very start of his career.

This description of the music is provided by Wikipedia:

An ogive is the curve that forms the outline of a pointed gothic arch. Erik Satie gave this title to a set of four piano miniatures published in 1886 at the beginning of his career. Their calm, slow melodies are built up from paired phrases reminiscent of plainchant. He wanted to evoke a large pipe organ reverberating in the depth of a cathedral, and achieved this sonority by using full harmonies, octave doubling and sharply contrasting dynamics.

Source for this MIDI

The Magic of MIDI: Eternity

The music was found on archive.org, in "The Magic of MIDI", a nearly 1GB zip file containing thousands of songs.

Learn More

Learn more about the wiring used in this prototype, read the source code for the MIDI playback and LED light management, or see additional credits related to the open source software we used, at our Github page.

The Prawnalith: Electronics, Tanks, and Glory

The Prawnalith

My wife and I learned to raise chickens for eggs (mostly) and meat (sometimes), having been inspired by Michael Pollan and Joel Salatin in 2011. She has since gone on to do a lot of research into raising freshwater prawns, and experimented with breeding them in 2018.

At the same time, we took on a minor interest in electronics and created several little utilities related to monitoring temperature and pH for prawns raised in a 200 gallon stock tank, and a 55 gallon aquarium. We also built a simple, wireless device to monitor the temperature and humidity of a local area: the building where we initially kept the tank, or, nowadays, our front yard. We developed a very humble, scrolling LED display that kept us apprised of our data from the comfort of the living room. We tracked the current temperature and pH data so that it could be displayed on a secure web application. And we experimented with viewing long-term charts of the data in grafana, using InfluxDB as a time series database.

Pleasantly Scrolling

Now it's 2020, and we're not currently raising prawns, but the old stock tank is being used as a reservoir to help sustain a small crop of onions in an aquaponics table. Maybe we'll raise some fish?

Regardless of the hobbyist agriculture angle, it's time to revisit the hobbyist electronics treasure trove. It's been too long.

May 21, 2020.

Lab Notes

Today we're attempting to reinitialize the entire effort.

State of Affairs: May 21, 2020

Trying rust on the ESP8266

πŸ¦€ Ah, rust.

πŸ’Έ Ah, the $6 ESP8266 microcontroller.

What could be better than spending many, many hours trying to make them work together?

In fact there are several potential ways to go about it, but it looks like there's a new approach recently published online.

Let's give it a try. The following procedure will download a gigantic XTENSA toolchain which we can hopefully use to push some simple rust code onto an ESP8266. The docker download is a beefy 5GB! πŸ„

$ cat /tmp/bootstrap-esp.sh
#!/bin/bash
docker run -ti -v $PWD:/home/project:z quay.io/ctron/rust-esp create-project

$ sh /tmp/bootstrap-esp.sh
Creating Makefile (Makefile)
Creating esp-idf symlink (esp-idf -> /esp-idf)
Creating cargo config (.cargo/config)
Creating main application wrapper (main/esp_app_main.c)

vision project

This is our small experiment with object detection in tensorflow.js (pre 1.0!): vision.prawn.farm

It does not collect any data from user devices and is intended simply as a "hello world" style introduction to object detection in TFJS.

The source for this project is available on Github.

Object Detection Resources

Raspberry Pi + H264 Live Camera + Web

We found a really nice project which lets you stream video from a Raspberry Pi (model 3 B+, in our case). We connected the Camera Module v2 to ours, and had good results.

The project uses a reasonable frame rate to deliver nice results, despite the Pi's modest processing power.

We created a fork of the project which kills the raspivid process on disconnect, and simplifies the UI. It also changes the default port to 80, so that you have a nicer URL available.

OSS Commitment Tracking

In an effort to manage time and prioritize our work, this page tracks our outstanding commitments made to various open source projects.

Outstanding

Completed

The Junk Drawer

Everything in this section has so far resisted categorization.

Maybe we'll find a place for it later.

Stuff to Tinker With

I'd like to look further into some of these items.

Articles

Papers

  • On the Measure of Intelligence. "To make deliberate progress towards more intelligent and more human-like artificial systems, we need to be following an appropriate feedback signal: we need to be able to define and evaluate intelligence in a way that enables comparisons between two systems, as well as comparisons with humans."

lolbots πŸ™„ 🀣 πŸ€– πŸ’©

Why not just make a bot which responds ironically to our discord server? It should train itself based on our local subgroup of freaks and weirdos. Obviously, we would make the source open for the public to scrutinize, because if we don't keep tinkering frenetically with two-star projects on Github, then we won't achieve immortality.

We could try using the DeepMoji project to somehow mix sarcasm into the intellectual brew, but we still need to investigate, and make sure that this isn't going to eat up weeks of time. I mean, we sort of coasted through that data mining course in grad school without considering the 72 Trillion Dollar AI Hype Machine that would barrel into the world a few short years after we graduated. Dear Reader, we're not machine learning experts over here. Just a curious application developer.

but really

so much love

You wanna try?

The test cases are what led us this far down the rabbit hole. Look at this test case. JUST LOOK AT IT! πŸ’– πŸ‡

deepmoji test cases

OK, you get the idea. Go ahead and read more about the DeepMoji project from MIT if you want to.

So What?

We already have an NVIDIA Jetson Nano sitting on our desk computing the moves for our Goban using KataGo - a slick, community-sponsored neural net. Please don't judge us by the level of dust collecting on this lil' guy -- we're working in a very dynamic environment. 🀧

lilbrain in situ

We even created 1/3 of the necessary architecture in our recent project: we could just string up a websocket from the local NVIDIA GPU to a cloud provider, and avoid exposing our blessed domicile's network surface to the world of trolls, bots, and whatever else. Rather than creating some sort of PULL-based situation, we would prefer to have some cloud-hosted Discord gateway push data down to the NVIDIA Nano GPU, which would then consider our chats in a Machine Learny Way, and then respond with a snarky reaction.

πŸ™„ πŸ€– We're gunning for Artificial Irony. πŸ€– πŸ™„

Install tensorflow on the NVIDIA Jetson Nano

Our Nano is still pretty lightly loaded. We tried to install several packages, hoping to follow something like the NVIDIA instructions to easily run Tensorflow. Yeah. Right.

The Woe

After about 38 minutes we gave up on the project entirely, realizing that the model we want to use only supports Python 2, while the NVIDIA Jetson Nano (surprise!) wants us to install TF using Python 3.

πŸ”₯ πŸš’ πŸ”₯

Successfully installed absl-py-0.9.0 astor-0.8.1 cachetools-4.1.0 google-auth-1.14.3 google-auth-oauthlib-0.4.1 google-pasta-0.2.0 grpcio-1.29.0 keras-preprocessing-1.1.2 markdown-3.2.2 oauthlib-3.1.0 opt-einsum-3.2.1 pyasn1-0.4.8 pyasn1-modules-0.2.8 requests-oauthlib-1.3.0 rsa-4.0 scipy-1.4.1 tensorboard-2.1.1 tensorflow-2.1.0+nv20.4 tensorflow-estimator-2.1.0 termcolor-1.1.0 werkzeug-1.0.1 wrapt-1.12.1


$ python3
Python 3.6.9 (default, Apr 18 2020, 01:56:04)
[GCC 8.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import tensorflow
2020-05-17 00:03:42.129454: W tensorflow/stream_executor/platform/default/dso_loader.cc:59] Could not load dynamic library 'libcudart.so.10.2'; dlerror: libcudart.so.10.2: cannot open shared object file: No such file or directory
2020-05-17 00:03:42.129529: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
2020-05-17 00:03:45.107357: W tensorflow/stream_executor/platform/default/dso_loader.cc:59] Could not load dynamic library 'libnvinfer.so.7'; dlerror: libnvinfer.so.7: cannot open shared object file: No such file or directory
2020-05-17 00:03:45.107613: W tensorflow/stream_executor/platform/default/dso_loader.cc:59] Could not load dynamic library 'libnvinfer_plugin.so.7'; dlerror: libnvinfer_plugin.so.7: cannot open shared object file: No such file or directory
2020-05-17 00:03:45.107673: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:30] Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.
>>>

BUT WAIT THEY'RE STILL WORKING ON IT! Here is a forked repo with active contribution, working toward supporting python 3 and TF 2.0. Note that it's TF 2.0. Not TF 2.1.

Here is the related PR on the main DeepMoji repo.

Not So Good

Sort of fail. Try TF 2.0.0.

Others are having this same issue

THIS DOES NOT REALLY WORK, VERSIONS INCOMPATIBLE: We probably need to line up versions just so...

THIS DOES NOT REALLY WORK, CRASHES: Or we could potentially run in a container

Looks like one sure-fire fix is to build from source, in which case we would need golang so that we can install bazel.

And yes, we must build bazel from source, because we're on an ARM64 platform. πŸ€“

https://github.com/bazelbuild/bazel/issues/8833#issuecomment-629039759

FINALLY: PLEASING SCROLL

We really needed some of this scroll. After a bit of a struggle, we're compiling Tensorflow 2.1.0 on our NVIDIA Jetson Nano and may yet be able to Hello.

Configuration finished
userhost:~/git/tensorflow$ bazel build //tensorflow/tools/pip_package:build_pip_package
Extracting Bazel installation...
Starting local Bazel server and connecting to it...

yo theres even color

Come Back 65,536 Hours Later...

...and the Nano is still compiling Tensorflow 2.1. That's right, 2.1. We completely forgot that there's likely no support by DeepMoji for this version - we should have compiled 2.0.

That's fine. It doesn't matter. Nothing matters. (OK, PLENTY OF THINGS MATTER, calm down.) But yeah, we're sticking with the latest version of Tensorflow and we're just going to see how things play out.

Here's what's up: we've already downloaded half the internet during this compile.

And finally IT FAILS!

You Need a Rest Break

πŸ’€ πŸ’€πŸ’€

pdate(highwayhash::HH_U64, const char*, highwayhash::HH_U64, State*)':
external/highwayhash/highwayhash/compiler_specific.h:52:46: warning: requested alignment 32 is larger than 16 [-Wattributes]
 #define HH_ALIGNAS(multiple) alignas(multiple)  // C++11
                                              ^
external/highwayhash/highwayhash/state_helpers.h:49:41: note: in expansion of macro 'HH_ALIGNAS'
   char final_packet[State::kPacketSize] HH_ALIGNAS(32) = {0};
                                         ^~~~~~~~~~
ERROR: /home/user/.cache/bazel/_bazel_user/9fef75a91d3167b0d0af1e5d464b12c3/external/aws-c-common/BUILD.bazel:12:1: C++ compilation of rule '@aws-c-common//:aws-c-common' failed (Exit 1)
external/aws-c-common/source/arch/cpuid.c:23:10: fatal error: immintrin.h: No such file or directory
 #include <immintrin.h>
          ^~~~~~~~~~~~~
compilation terminated.
Target //tensorflow/tools/pip_package:build_pip_package failed to build
Use --verbose_failures to see the command lines of failed build steps.
INFO: Elapsed time: 2988.839s, Critical Path: 67.64s
INFO: 1146 processes: 1146 local.
FAILED: Build did NOT complete successfully

πŸ’€ πŸ’€πŸ’€

Cheer Up

Look, this isn't our first rodeo.

Let's just try compiling TF for the next 40 hours using this guide. This individual has even figured out TF 2.0!

We don't have to sit and watch every step of the process.

We don't have to even shut down KataGo. That would be weird.

If it works, it works. If it doesn't, FINE!

Resolution

We failed to succeed in compilation of TF 2.0 on NVIDIA Jetson Nano. Thank you.

Trying Google Cloud Run

This is just a quick try-out of Google Cloud Run.

We will build minimal containers that serve Hello World text over HTTP using both Rust πŸ¦€ and Deno πŸ¦•.

Google provides very nice documentation to help get started.

Using Rust on Google Cloud Run

We expect to be able to run Hello World in rust using a tiny docker build image. The rust musl builder from emk & community is simply wonderful in this regard. You can create a tiny little rust program in a tiny little docker container. When you're counting your cloud provider's storage quota in terms of pennies and cents, that's a very good thing.

We compared the docker image size of a trivial Hello World built using the primary rust docker image to the docker image size of the same program built using the rust musl builder. Here are the results:

docker images |grep traditional
docker images |grep tiny-musl-guy

▢️ ▢️ ▢️

traditional   ...(trimmed)...     1.6 GB
tiny-musl-guy ...(trimmed)...    9.22 MB
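
A two-stage build along the lines of the rust-musl-builder README produces images in this weight class (the binary name below is hypothetical):

# Stage 1: build a fully static binary against musl
FROM ekidd/rust-musl-builder AS builder
ADD --chown=rust:rust . ./
RUN cargo build --release

# Stage 2: ship only the binary
FROM alpine:latest
COPY --from=builder \
    /home/rust/src/target/x86_64-unknown-linux-musl/release/rust-gcr-demo \
    /usr/local/bin/rust-gcr-demo
CMD ["rust-gcr-demo"]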

Buddy Have An HTTP

You need to serve some sort of minimal HTTP response in order for Cloud Run to do anything with your program. We arbitrarily picked the Hello World example from tide.

#[async_std::main]
async fn main() -> Result<(), std::io::Error> {
    let mut app = tide::new();
    app.at("/").get(|_| async { Ok("Hello, world!") });
    app.listen("0.0.0.0:8080").await?; // bind to 0.0.0.0 so Cloud Run can reach the server
    Ok(())
}

Actually Have Google Do The Cloud Things Now

This part is refreshingly straightforward. First, we installed the gcloud command line utility. Dead simple.

Building a Docker Image in GCR

The remotely-created docker build will just work on the first try, because Google loves you. (Does that sound creepy? πŸ€”)

We built the image remotely using gcloud, after which it was automatically stored in the Google Container Registry:

gcloud builds submit --tag gcr.io/PROJECT_NAME/rust-gcr-demo

We received a very comforting stream of text indicating forward progress while engaging with google's cloud in this way:

Creating temporary tarball archive of 8 file(s) totalling 2.0 KiB before compression.
Uploading tarball of [.] to [gs://PROJECT_NAME_cloudbuild/source/9999999999.34-aaaaaad7496d917b02d0cdd1b30a.tgz]
Created [https://cloudbuild.googleapis.com/v1/projects/PROJECT_NAME/builds/e0e0e0e0-aaaa-ffff-928b-9a9a9a9a9a9a].
Logs are available at [https://console.cloud.google.com/cloud-build/builds/e0e0e0e0-aaaa-ffff-928b-9a9a9a9a9a9a?project=000000000000].
--------------------------------- REMOTE BUILD OUTPUT ---------------------------------
starting build "e0e0e0e0-aaaa-ffff-928b-9a9a9a9a9a9a"

FETCHSOURCE
Fetching storage object: gs://PROJECT_NAME_cloudbuild/source/9999999999.34-9999999999999999999999.tgz#8888888888888888
Copying gs://PROJECT_NAME_cloudbuild/source/999999999934-9999999999999999999999.tgz#8888888888888888...
/ [1 files][  1.5 KiB/  1.5 KiB]
Operation completed over 1 objects/1.5 KiB.
BUILD
Already have image (with digest): gcr.io/cloud-builders/docker
Sending build context to Docker daemon  9.728kB
Step 1/6 : FROM hayd/alpine-deno
f5a1a52ea0cc: Pull complete
91f70b368d00: Pull complete
Digest: sha256:68c60a649aadbb7139bd4fd248eb7426d3f829f4574ce78ebc97fc3e22059498
Status: Downloaded newer image for hayd/alpine-deno:latest
 ---> 4b8d66a41759
Step 2/6 : WORKDIR /var/hello
 ---> Running in 78f3fc613ff3

...SNIP...

Step 6/6 : CMD ["run","--allow-env","--allow-net","index.ts"]
 ---> Running in 45e58559d684
Removing intermediate container 45e58559d684
 ---> 3ecf00d3c661
Successfully built 3ecf00d3c661
Successfully tagged gcr.io/PROJECT_NAME/rust-gcr-demo:latest
PUSH
Pushing gcr.io/PROJECT_NAME/rust-gcr-demo
The push refers to repository [gcr.io/PROJECT_NAME/rust-gcr-demo]
5ca1b5bf7698: Preparing

...SNIP...

f8d7c190aaa1: Pushed
3b7d1dad260a: Pushed
5216338b40a7: Layer already exists
517a217ae4f9: Pushed
70c4b84bb2a4: Pushed
latest: digest: sha256:e1563eb2fdcba0a36b23c34050a2145b47833ce607adc912ee750348bc512e9$
 size: 1573
DONE
--------------------------------------------------------------------------------------$

ID                                    CREATE_TIME                DURATION  SOURCE                                                                         IMAGES                                       STATUS
e0e0e0e0-aaaa-ffff-928b-9e9a9a9a9a9a  2020-05-15T23:18:50+00:00  20S       gs://PROJECT_NAME_cloudbuild/source/6666666666.34-aaaaaaaaaaaaaaaaaaaaaa.tgz  gcr.io/PROJECT_NAME/rust-gcr-demo (+1 more)  SUCCESS

The build output is tiny. When we finally added tide to the hello world example pictured below, the artifact was closer to 4.4MB... but you get the idea.

here is a pic of a tiny build, having been uploaded

Deploying to Google Cloud Run

Deployment is also painless. Make sure you bind to host 0.0.0.0 in your application, and obey the PORT variable that Cloud Run provides. We were lazy and just hardcoded port 8080 in our code, ignoring best practices for the sake of a quick, throwaway learning experience.
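
If you'd rather play by the rules, honoring PORT is only a couple of extra lines (a sketch based on the tide example above):

#[async_std::main]
async fn main() -> Result<(), std::io::Error> {
    let mut app = tide::new();
    app.at("/").get(|_| async { Ok("Hello, world!") });
    // Respect Cloud Run's PORT variable; fall back to 8080 for local runs.
    let port = std::env::var("PORT").unwrap_or_else(|_| "8080".to_string());
    app.listen(format!("0.0.0.0:{}", port)).await?;
    Ok(())
}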

gcloud CLI deployment remains simple:

gcloud run deploy --image gcr.io/PROJECT_NAME/rust-gcr-demo --platform managed

And there is another pleasing interaction waiting for you in your terminal:

Service name (rust-gcr-demo):

Allow unauthenticated invocations to [rust-gcr-demo] (y/N)?

Deploying container to Cloud Run service [rust-gcr-demo] in project [PROJECT_NAME] region [us-east1]
βœ“ Deploying new service... Done.
  βœ“ Creating Revision...
  βœ“ Routing traffic...
Done.
Service [rust-gcr-demo] revision [rust-gcr-demo-00001-qow] has been deployed and is serving 100 percent of traffic at https://rust-gcr-demo-somenonsense-xx.z.run.app

Comparison to a Deno Hello World

We did the same thing over again with Deno.

Deepdream

Deepdream is cute. It's fun to play with yesterday's AI advancements.

Resources

This is the Deepdream Colab Notebook, in its original location on Seedbank.

The newer version is hosted on AIHub.

It can transform strange things into even stranger artifacts.

It's based on TF 1.x, so you need to make sure you use the old version of the lib before starting your notebook.

%tensorflow_version 1.x

Experimenting

You can then do some transformations.

Waking Image

This input

Dreamy Image

This output had all the default settings for step #4, but we boosted strength to ~500.

An output example

Dreamier Image

This output had further-jiggled settings:

  • Layer: mixed4d_3x3_bottleneck_pre_relu
  • Strength: 500
  • iter_n: 25

Output

Resources for Setting up mdBook

Resources for Setting up Github Pages

ffmpeg

πŸ€·β€β™‚

crop video

You can crop video using ffmpeg. In the crop filter, out_w and out_h are the output width and height, and x:y is the top-left corner of the crop region:

ffmpeg -i in.mp4 -filter:v "crop=out_w:out_h:x:y" out.mp4

e.g. crop a 1080x600 section starting at 0,900:

ffmpeg -i in.mp4 -filter:v "crop=1080:600:0:900" out.mp4

Shrink Video

You can shrink video by re-encoding it with libx265; a higher -crf value trades quality for a smaller file.

ffmpeg -i in_big.mp4 -vcodec libx265 -crf 24 out_small.mp4

Fun Stuff

This chapter contains things that are hopefully fun.

Music Directory

We hope you find these to be easy on the ears, with the notable exception of references to Merzbow.

Interesting Apps and Resources

Radio Garden: Explore Live Music

Videos Online

Fallout Shelter Cheat: 9999 Lunch Boxes

This cheat works for Xbox One and requires Microsoft Windows 10. It was discovered in the Genie Fallout Shelter 9999 Lunch Boxes video.

  • You must be logged into an Xbox account shared by both your Windows 10 machine and your Xbox One.
  • You need to download and install Fallout Shelter onto a Windows 10 machine using Microsoft Store. The account must be linked to your Xbox One.
  • Navigate to %APPDATA%, go up to the Local dir (not Roaming), then find e.g. BethesdaSoftworks.FalloutShelter_AAAAkfvn8vvvv\SystemAppData\wgs\000900000CB0BE87_000000000000000000000000DEADBEEF\BEEFBEEFBEEFBEEFBEEFBEEFBEEFBEEF7
  • Use the vault file listed at Genie Fallout Shelter 9999 Lunch Boxes video or access the edited vault file here.
  • Replace one of your games with that vault file.
  • Load the game on your Windows 10 machine, hit save, and wait for Xbox routines in Windows 10 to sync your data.
  • Quit the game on Windows 10 and start Fallout Shelter on your Xbox One. Your file should sync automatically and you should have lots of available lunch boxes.

Please visit Genie's YouTube channel and show some love for their contribution: this is a really fun exploit!

Source Code for this Book

The source code for this book is available at Github.

This book is licensed under GNU GPLv3.

Providing Feedback

Feedback may be provided by opening an issue on our Github project page.