Introducing bigthanks.org

Late last week, a few of us started discussing ways we could help out local restaurants that are devastated by the COVID crisis. My friend Michelle came up with the brilliant idea to raise money to buy gratitude meals for first responders, to help express just some of our thanks for the work they’re doing and will do.

I think a lot of people want to help in various ways, and have lots of different constraints (financial, time, movement, etc.). I thought it’d be helpful to have a place where people who want to reach out can share ideas and be inspired. We’re calling the all-volunteer organization #bigthanks. It’s about putting gratitude into action. It’s available at bigthanks.org.

Yesterday we delivered hot meals to 200+ health care workers and 100+ first responders.

Today we feed another 50+ in the El Centro de la Raza community on Beacon Hill.

Through the generous support of a handful of donors, we’ve raised over $4,000 so far to help support a second-generation downtown restaurant, their kitchen staff, and other food service workers in town: some much-needed revenue.

Head Chef Ronnie and Shannon

We’re learning a lot about how to do this, particularly how to do this in a responsible, socially-distanced and safe way.

#bigthanks is a hashtag that’s about turning our gratitude into productive action. We’d love to hear your ideas.

Please join us at bigthanks.org.

#bigthanks

#wegotthisseattle

https://bigthanks.org/…/gratitude-meals-for-first-responders

50+ Ideas for Your Socially Distanced Spring

At this writing in Seattle, amid the COVID-19 outbreak, nearly all schools have announced closure, and have gone (or are soon going) to remote learning. Many workplaces have also done the same, and public officials are encouraging people not to gather in large groups.

First, please read this excellent summary from Tomas Pueyo as to why we need to implement social distancing now, not in the future, but now.

I think it may be apocryphal, but I’m told that when the Chinese write the word “crisis,” they write it as two characters which literally mean “dangerous opportunity.”

What are the opportunities to be productive and happy while under “stay at home” restrictions? Here’s a starting list.

Feel free to add your own in the comments section below.

  1. Organize office and filing cabinets
  2. Weed out closets for eventual donations
  3. Binge watch: I can heartily recommend Succession, Ozark, The Crown, McMillion$
  4. Read: for non-fiction, I can heartily recommend Bad Blood, United States of Arabia, Devil in the White City, Unbroken, and Battle Cry of Freedom. You?
  5. Master making bread. Have you tried making No Knead bread in a Dutch Oven? Amazing.
  6. Take an online course. Udemy.com has terrific online learning; I’ve done courses on Angular, Python, React, Swift and more. There’s always something to learn.
  7. Learn or improve your musical instrument skills. Piano? Guitar?
  8. Hand-write a letter or two to a friend or relative.
  9. Start that blog or podcast that you’ve always wanted to
  10. Organize a Google Hangout or WebEx/Zoom meeting to teach something you know to your kids’ class.
  11. Review documents you’ve stored in boxes, shred and purge
  12. Not to be morbid, but consider finally writing down that document of what loved ones should do if you should fall ill or be incapacitated for any reason, and make sure they know where it is.
  13. Start a video blog
  14. Purge those fun-run T-shirts from previous years that you’ll never wear, even for yard work.
  15. Eat dinner as a family and share stories. Maybe show the family slideshow.
  16. Break out the jigsaw puzzles.
  17. Check in on elderly and immune-compromised neighbors especially. Make sure they know how to reach you. Can you get them food or deliveries? Do they have a contact plan in place?
  18. Consider offering to advance-pay those whose businesses may be disrupted by this virus (dog-walkers, housecleaners, hell, even your favorite restaurants via a gift certificate, etc.)
  19. Advance plan your Christmas or Holiday gift list for 2020. Your November/December self will thank you!
  20. Finally make that checklist of yearly home maintenance tasks, so you don’t forget anymore what needs to happen every October, December, or April.
  21. Build something cool with Raspberry Pi or Arduino, maybe with your kids if you have ’em. Here’s an Enchanted Rose with Falling Petals I made. And here’s a Photo Booth.
  22. Update that household budget, or do a pie chart of what you spend on what. Consider tools like Mint and SigFig.
  23. Organize your digital files, and finally get that backup strategy in place. What’s your backup strategy for photos, in particular?
  24. Enter your favorite family recipes on a tool like BigOven (https://bigoven.com) and share the link with family members.
  25. Start a garden
  26. Watch the upcoming Paris to Nice bike race this week, “The Race to the Sun”
  27. Fix any home technology that might not be working
  28. Move/ switch out the art on the walls. Makes your home feel fresh!
  29. Take long walks or a jog outside and enjoy the trees and flowers that are starting to bloom. You don’t have to be a shut-in, but you should keep a distance and avoid large groups, washing hands before and after.
  30. Thinking of all our “Grand Friends” who are now isolated and hunkered down [Editor’s note: these are assisted-living residents that many of our kids made connections with via their school]. How about sending snail mail? Kids can write letters and send art. Let’s spread some love.
  31. Print digital photos and put in frames/ photo books
  32. Wash the windows!
  33. Research family history
  34. Finally apply for citizenship if you’ve got a foreign parent
  35. Ask your favorite neighborhood restaurant if you can prebuy food, say, a gift certificate. If they say they don’t offer them, ask for the email contact of the owner and email them. They’ll likely appreciate it.
  36. Remember that hikes in the wilderness are (for now at least) totally fair game. Avoid surface contact at restrooms etc. AllTrails is terrific.
  37. Organize cupboards and pantry. Labels really help!
  38. Have your children wash your car
  39. Change passwords on online accounts and/or get a password manager like LastPass and deploy it
  40. Exercise
  41. Teach your kids important things they need time to study but don’t necessarily learn in school – the stock market, checkwriting, bank cards, billpaying, etc.
  42. Fix that nagging loose doorknob that’s been bugging you
  43. Document common home fixes or routine maintenance items via video recording, perhaps even put it into a blog
  44. Set up a Network Attached Storage device like Synology and centralize ALL home photos and video on it, with an offsite backup strategy
  45. Look for items to donate in your home; you might not be able to bring them to Goodwill immediately, but get them put aside, bagged, boxed or labeled for easy donation later.
  46. Plan your next vacation, but maybe don’t yet book it
  47. Get your long-postponed earthquake preparedness kit together (yes, other disasters don’t care much about COVID-19)
  48. Figure out some key metrics of your household spending — e.g., how much do you spend on dining out? Do you know the percentage? You might be surprised.
  49. Write down five goals you’d like to achieve in the next ten years, and have your spouse/partner do the same. See how they compare.
  50. Write an encouraging note to your neighbor.
  51. A couple months after this COVID crisis has passed, I’ll be hosting a “Drink, Talk Learn” party, so I’ll be working on a presentation for that. Consider making a 3-minute PowerPoint deck on any subject you’re passionate about.
  52. Interview an older relative for StoryCorps via the StoryCorps app, on any aspect of their life.
  53. Donate to your favorite charities, or research new ones in the fields you’re most passionate about.
  54. Get the family bikes out one by one, and get them ready to ride.
  55. Clean out the garage (should be OK to do solo if not in contact with COVID surfaces). Use a mask and wipes to be safe; there are lots of surfaces.
  56. There are approximately one bazillion clever craft or make-at-home projects on Pinterest and Instructables. Choose one.
  57. Break out a great board game with the family or neighbor. Some good ones are Ticket To Ride, Settlers of Catan and Codenames.

Introducing popsee – easy video surveys

Have you ever been to an anniversary or birthday celebration which included video well-wishes from friends and family? Or, have you ever wanted to collect a series of video testimonials from customers?

If you’ve ever tried to gather a bunch of videos from people, you know it’s not easy. It’s a hassle to nudge people, it’s a hassle for them to record a response, upload it somewhere, send you a link. You invariably get it in all kinds of different formats and locations. And nowhere is the information easily sortable, searchable, taggable or organized.

I wanted to do something about that. I’ve launched a new, free tool called popsee which allows you to gather videos easily, from anyone with either a webcam (desktop or laptop) or an iPhone/iPad.

How It Works

Popsee is now in alpha, and only supports one use case (the townhall described below). But the basic steps are:

  1. A curator creates a popsee. Think of this as a short video survey.
  2. Curator gets a coded weblink which they can send anywhere
  3. End users following that link can easily respond via webcam in any browser, or on iPhone/iPad. (There’s no Android app yet.)
  4. popsee does basic validation for you — on things like video length, etc. End-users can re-record clips as many times as they’d like before uploading.
  5. As videos roll in, curator gets a handy dashboard to manage and sort them. Curator can download movies in standard movie formats and edit as they wish.
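For the curious, the flow above could be modeled roughly like this. All names and fields here are hypothetical, invented for illustration; this is not popsee’s actual code or API:

```python
from dataclasses import dataclass, field
import uuid

@dataclass
class Popsee:
    # A "popsee" is a short video survey created by a curator.
    curator: str
    prompt: str
    max_length_secs: int = 60
    # Coded weblink token the curator can send anywhere.
    link_code: str = field(default_factory=lambda: uuid.uuid4().hex[:8])
    responses: list = field(default_factory=list)

    def share_link(self):
        # The shareable URL form of the coded link (hypothetical format).
        return f"https://popsee.com/r/{self.link_code}"

    def submit(self, video_secs, respondent):
        # Basic validation (e.g., video length) before accepting an upload.
        if video_secs > self.max_length_secs:
            return False
        self.responses.append(respondent)
        return True
```

From here, a curator dashboard would just be a view over `responses`, sortable and downloadable.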

Uses

  • Birthdays, weddings, anniversaries and celebrations
  • Conferences
  • Townhall style forums
  • Product Testimonials
  • Auditions
  • …more

I wanted an easy way for any curator to gather and organize videos from a group of people.

Origin Story

A citizen group I’m part of, SPEAK OUT Seattle!, is organizing a series of townhall-style candidate debates for an upcoming city election. As part of this townhall series, I volunteered to film a series of questions from citizens from around Seattle to be projected on the big screen.

When I started to think about the effort involved in driving around Seattle to collect about 80 videos, it dawned on me just how many people have webcams and good-quality smartphones, and that this technology can really help with the sourcing or “audition” process.

Most important, I wanted the tool to be easy. I wanted it to also include simple “metadata” that the curator wanted; in this case, the question in written form, and contact information. 

I was surprised at the lack of tools to allow a curator to initiate a video request from a group of people via, say, a specially-coded weblink (like a shortened URL.)

Sure, you can write an email or do a Facebook post asking people to record a video, upload it to YouTube and send you the link, or drop a bunch of videos in Dropbox. But I wanted something point-and-click simple, and I wanted it to optionally include simple survey questions chosen by the curator. And when videos do arrive, I wanted them in a searchable format, with “metadata” such as contact information, email, or the subject.

Over time, I’ll be looking at automatic transcription tools, search and indexing tools, word clouds and more. Ultimately, I want a platform where a survey initiator can build a simple survey, with one or more of the questions answered by video.

But currently, it’s a Minimum Viable Product ready for some testing.

Status

It’s in alpha testing.

Meaning: it’s being used just for the SPEAK OUT Seattle event.

The free iOS app is in review by Apple and should be available in the next two weeks. This app currently just lets you respond to popsee requests; I expect it will allow you to initiate them some time later this year.

I’ll be building out a great dashboard for the curators, which will include the ability to kick off new requests. If you’d like to try it out, follow and send a DM to @popseea on Twitter.

Learn more at https://popsee.com.

Send In Ideas

I’d love to hear your ideas and scenarios for requesting videos from people. How can it be made easier for you? Tweet your ideas to @popseeA.

Neural Style Transfer – Current Models

I’m working on a neural-style transfer project, and have several machine learning models trained to render input photos in particular styles.

The current set is below; input image on the left, output image on the right, with model name in lower right hand corner.

I’ve got a few clear favorites, but I’d love to see if they match yours. Which 3-5 do you like?

(Gallery: results from all 42 models, numbered 1–42, each with the input photo on the left and the stylized output on the right.)

Elektro, the Smoking Robot of 1937

I’ve always been fascinated by past visions of the future. Science fiction uses the future to tell us something about ourselves, so looking back on past visions of the future, we can learn something about that age and the values, myopia, optimism and fears of the time. It’s also healthy to continually do cross-checks on “how accurate was that prediction” and “what did we miss?” so that we can improve the accuracy of futuristic predictions over time.

Lost in the drama and bloodshed of the WWII age is the story of Elektro, the Smoking Robot.

In an era when we should have been much more focused on the rise of authoritarianism and threats to freedom, we human beings actually built, at great time and expense, a robot that could respond to basic voice commands, talk, distinguish red from green, do confined movements and smoke a cigarette.

Built by Westinghouse in Mansfield, Ohio, in 1937, Elektro was a 7-foot, 250-pound star of the 1939 World’s Fair. Elektro responded to an operator’s voice commands using basic syllabic recognition, and his chest cavity lit up as he recognized each word. Each word set up vibrations that were converted into electrical impulses, which in turn operated the relays controlling eleven motors. What mattered was how many impulses the operator sent, not what was actually said.
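A toy Python simulation of that impulse-counting scheme. The command table below is invented for illustration; Elektro’s actual relay wiring mapped impulse counts to routines in ways not detailed here:

```python
# Hypothetical command table: the real mappings aren't documented here,
# so these are invented to illustrate the idea.
COMMANDS = {1: "stop", 2: "walk forward", 3: "raise arm", 4: "speak"}

def impulses(phrase):
    # One "impulse" per spoken word. The words themselves don't matter,
    # only how many there are, just as with Elektro.
    return len(phrase.split())

def select_command(phrase):
    # The impulse count selects which relay circuit (command) fires.
    return COMMANDS.get(impulses(phrase), "no action")
```

So “come here please” and “one two three” would trigger the same routine, since both deliver three impulses.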

Check out this video to see a full demonstration of what Elektro could do, from The Middleton Family at the New York World’s Fair:

The Tin Man was to make his appearance on film that year, in the 1939 release The Wizard of Oz.

Meanwhile, across the Atlantic, Hitler was set to invade Poland. Alan Turing was off taking mathematics seminars from Wittgenstein in Cambridge, England. His Enigma decoding efforts had not yet begun, but within three years those efforts would help usher in the age of computing and the programmable machine.

Elektro didn’t house any real software, aside from pre-recorded audio. He also didn’t learn anything — what Elektro could do was entirely predetermined by engineers through circuitry, relays and actuators.

Elektro could:

  • “Recognize” basic spoken words — actually, just distinguish between the number of impulses
  • Do basic audio output (via 78rpm record player)
  • “Walk” and move his hands (thanks to nine motors)
  • Recognize red or green
  • …and of course, smoke

A series of properly spaced words selected the movement Elektro was to make. His fingers, arms and turntable for talking were operated by nine motors, while another small motor worked the bellows so the giant could smoke. The eleventh motor drove the four rubber rollers under each foot, enabling him to walk. He relied on a series of record players, photovoltaic cells, motors and telephone relays to carry out his actions. He could perform 26 routines (movements) and had a vocabulary of 700 words. Sentences were formulated by a series of 78 RPM record players connected to relay switches.

Elektro did his talking by means of recordings, thanks to eight embedded turntables, each of which could be used to give 10-minute talks. Except for an opening talk of about a minute, his other speeches were only a few seconds long. A solenoid, activated by electrical impulses in proportion to the harshness or softness of spoken words, made Elektro’s aluminum lips move in rhythm with his speech-making.

Millions stood in line for as many as three hours to watch Elektro during his 20-minute performances at the 1939-40 World’s Fair in New York City.

The hole in Elektro’s chest was deliberate, since Westinghouse wanted visual proof that no one was inside. As commands were spoken to him, one of two lightbulbs in his chest would flash, letting the operator know he was receiving the signals. He could turn his head side to side and up and down. He talked and his mouth opened and closed. His arms moved independently with articulated fingers.

He also smoked. An embedded bellows system let him puff on a cigarette, which was lit by his operators. Apparently one of the operators trained to work Elektro (John Angel, shown below) used to smoke a pipe, but quit after seeing how much residue built up inside Elektro during each day’s cleaning.

Elektro was later joined by a robotic dog, Sparko:

After the World’s Fair, the two embarked on a cross-country journey. Apparently, a female companion was planned for Elektro, but when World War II broke out, aluminum was in short supply, Westinghouse was needed on many projects, and the plans to build one were cancelled.

Applying Artist Styles to Photographs with Neural Style Transfer

In 2015, a research paper by Gatys, Ecker and Bethge posited that you could use a deep neural network to apply the artistic style of a painting to an existing image and get amazing results, as though the artist had rendered the image in question.

Soon after, a terrific and fun app was released to the app store called Prisma, which lets you do this on your phone.

How do they work?

There’s a comprehensive explanation of two different methods of Neural Style Transfer here on Medium; I won’t attempt to reproduce it, because the author, Subhang Desai, does such a thorough job. He explains that there are two basic approaches: the slow “optimization” approach (2015) and the much faster “feedforward” approach, where styles are precomputed (2016).

For the first, optimization-based approach, there are two main projects I’ve found: one based on PyTorch and one based on TensorFlow. Frankly, I found the PyTorch-based project insanely difficult to configure on a Windows machine (I also tried on a Mac): so many missing libraries and things that had to be compiled. The project was originally built for specific Linux-based configurations and makes a lot of assumptions about the local machine.

But the second project (the one linked above) is based on Google’s TensorFlow library and is much easier to set up, though from GitHub message-board comments I gather it’s quite a bit slower than the PyTorch-based project.

On-the-fly “Optimization” Approach

As Desai explains, the most straightforward approach is to do an on-the-fly paired learning of two images — the style image and the photograph.

The neural network learning algorithm pays attention to two loss scores, which it mathematically tries to minimize by adjusting weights:

  • (a) How close the generated image is to the style of the artist, and
  • (b) How close the generated image is to the original photograph.

In this way, by iterating multiple times over newly generated images, the code generates images that are similar to both the artistic style and the original image — that is, it renders details of the photograph in the “style” of the image.
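A rough numpy sketch of that combined loss. Feature extraction (done with a pretrained VGG network in the actual papers) is omitted, and all names and weights here are illustrative, not taken from either implementation:

```python
import numpy as np

def gram(features):
    # Style is compared via Gram matrices: channel-to-channel
    # correlations of the feature maps, independent of spatial layout.
    channels, n = features.shape  # channels x (height * width)
    return features @ features.T / n

def total_loss(gen_feats, content_feats, style_feats,
               content_weight=1.0, style_weight=10.0):
    # (b) how far the generated image is from the original photograph
    content_loss = np.mean((gen_feats - content_feats) ** 2)
    # (a) how far the generated image is from the artist's style
    style_loss = np.mean((gram(gen_feats) - gram(style_feats)) ** 2)
    return content_weight * content_loss + style_weight * style_loss
```

The optimizer then adjusts the pixels of the generated image, step by step, to drive this weighted score down.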

I can confirm that this “optimization” approach, which iterates through generated images, takes a long time. Getting reasonable results took about 500+ iterations. The example image below took 1 hour and 23 minutes to render on a very fast machine equipped with a 6GB NVIDIA Titan 780 GPU.

I’ve used the neural-style transfer Tensorflow code written by Anish Athalye to transform this photo:

…and this artistic style:

…and, with 1,000 iterations, it renders this:

Faster “Feedforward” Precompute-the-Style Approach

The second and much faster approach is to precompute the filter based on artist styles (paper). That appears to be the way that Prisma works, since it’s a whole lot faster.

I’ve managed to get PyTorch installed and configured properly, without any of the luarocks dependencies and hassle of the original Torch library. In fact, a fast_neural_style transfer example is available via the PyTorch install, in the examples directory.

Wow! It worked in about 10 seconds (on Windows)!

Applying the image with the “Candy” artistic style rendered this image:

Here’s a Mosaic render:

…it also took about five seconds. Amazing. The pre-trained model is so much faster! But on Windows, I had a devil of a time getting the actual training of new style models working.

Training New Models (new Artist Styles)

This whole project (as well as other deep learning and data science projects) inspired me to get a working Ubuntu setup going. After a couple of hours, I had Ubuntu 18.04 successfully installed, and I’m now dual-booting my desktop machine.

The deep learning community and libraries are mostly Linux-first.

After setting up Ubuntu on an NVIDIA-powered machine, installing PyTorch and various libraries, I can now run the faster version of this neural encoding.

Training “Red Balloon” by Paul Klee

To train a new model, you take a massive set of input training images and a “style” painting, and tell the script to effectively “learn the style.” The script iteratively tries to minimize the weighted losses between the input images and the output image, and between the “style” image and the output image.

During the training of new models (by default, two “epochs”, or iterations through the image dataset), you can see the loss score for content and style (as well as a weighted total). Notice that the total is declining on the right — the result of the training using gradient-descent in successive iterations to minimize the overall loss.
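The shape of that training loop can be sketched with a toy stand-in: plain gradient descent on a weighted sum of two losses. The quadratic “losses” below are illustrative placeholders; the real loop backpropagates through a transformer network over the whole image dataset:

```python
# Toy stand-in for the style-transfer training loop: gradient descent on
# a weighted total of a "content" loss and a "style" loss. Targets and
# weights are invented for illustration.
content_target, style_target = 1.0, -2.0
content_weight, style_weight = 1.0, 5.0
w = 10.0   # stand-in for the network weights being trained
lr = 0.01  # learning rate

totals = []
for step in range(100):
    content_loss = (w - content_target) ** 2
    style_loss = (w - style_target) ** 2
    totals.append(content_weight * content_loss + style_weight * style_loss)
    # Gradient of the weighted total with respect to w.
    grad = (2 * content_weight * (w - content_target)
            + 2 * style_weight * (w - style_target))
    w -= lr * grad
```

Printed over the iterations, `totals` declines steadily, which is exactly the behavior the training log shows.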


I had to install CUDA, the parallel-computing platform from the clever folks at NVIDIA. CUDA allows tensor code (matrix math) to be parallelized, harnessing the incredible power of the GPU and dramatically speeding up the process. So far, CUDA is the de facto “machine learning for the masses” GPU library; none of the other major graphics chip makers have widely used equivalents.

Amazingly, once you have a trained artist-style model — which took about 3.5 hours per input style on my machine — each rendered image in the “style” of an artist takes about a second to render, as you can see in the demo video below. Cool!

For instance, I’ve “trained” the algorithm to learn the following style (Paul Klee’s Red Balloon):


And now, I can take any input image — say, this photo of the Space Needle:


And run it through the pytorch-based script, and get the following output image:


Total time:

(One-time) Model training learning the “Paul Klee Red Balloon Style”: 3.4 hours

Application of Space Needle Transform: ~1 second

Another Example

Learning from this style:

rendering the Eiffel Tower:

looks like this:

Training the Seurat artist model took about 2.4 hours, but once done, it took about 2 seconds to render that stylized Eiffel Tower image.

I built a simple test harness in Angular with a Flask (Python) back-end to demonstrate these new trained models, and a bash script to let me train new models from images in a fairly hands-off way.

Note how fast the rendering is once the model is complete. Each image is generated on the fly from a Python-powered API based on a learned model, and the final images are not pre-cached:

Really very cool!

 

original image:

style:


Output Image:

Netgear ORBI – This is the WiFi You Are Looking For

“Steve, the wifi is down.” As the go-to guy in the house for all tech issues, I’ve been hearing that call, and reading that SMS text from family members, for more than a decade. I’ve come to dread those words. In recent years, it’s been all too frequent. And since the longest-running wifi configuration in our house has had not one but three different SSIDs, the chances that one or more zones were down at any given time were high. To get fast wifi throughout the home, some approaches I’ve taken in the past have included:

  • One router and multiple access points
  • One router with repeaters/range extenders
  • A combination of the above
  • Replacing the entire system with the AMPLIFI Mesh Network

Using multiple access points and networks has the problem of complexity: we end up with various networks in the house, and devices need to hop onto their local “best” signal. Some client devices get confused finding that best signal, and it gets frustrating. The second approach is simplest, with a single broadcast and multiple “range extenders,” but it comes at the cost of speed. Since part of the 2.4 or 5GHz radio spectrum is used to relay the signal back to the base router, performance can easily halve with every repeater in the chain. And reliability there, too, has not been good: frequently a repeater will go offline or seem to “forget” its state of the world, especially when the base unit reboots. So I also gave the AMPLIFI mesh network a try. And while it’s a great product, it didn’t seem to get along well with SONOS.
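A back-of-the-envelope sketch of that repeater penalty, assuming each wireless hop roughly halves usable throughput (a rule of thumb, not a measurement):

```python
def throughput_after_hops(base_mbps, hops):
    # Rough model: each repeater re-transmits on the same radio it
    # receives on, roughly halving usable bandwidth per hop.
    return base_mbps / (2 ** hops)
```

So a 200Mbps link drops to about 100Mbps behind one repeater and about 50Mbps behind two, which matches how sluggish chained extenders feel in practice.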

Enter NETGEAR Orbi

I’ve finally found what I think is the best wifi system for our home: the NETGEAR Orbi Ultra-Performance Whole Home Mesh WiFi System.

NETGEAR’s Orbi Base-Satellite Combination is Easy to Set up and Fast

Easy Onboarding, Great signal.

I was very impressed by the onboarding, and one day in at least, the speeds are amazing. I’ll update this post in a few weeks or months to tell you whether it’s still working well. The NETGEAR Orbi Ultra-Performance Whole Home Mesh WiFi system is off to a terrific start, and I couldn’t be happier. These units have very strong signals, and they use just one SSID (network name) throughout the entire network, meaning your phone or device stays connected no matter where you move in the house. I’m very impressed by the Internet speeds I’m getting off the “satellite” stations: right now in my office I’m seeing speeds north of 190Mbps, two to three times faster than I was getting before.

I’m very pleased with it so far — the app has a handy display of network topology and connected devices. The web-based admin panel has far more control, and appears to have all the features of a typical high-end NETGEAR router (port forwarding, blocking, DDNS, etc.)

A key difference between the Orbi system and range extenders is that Orbi has its own private 5GHz channel that it uses for “backhaul” to the router, so you don’t lose any significant speed at the satellite location. And I love the fact that we’re now back to a single SSID through the whole house: you can move from room to room, even outside, and the SSID stays the same.

As for the Satellites, at this writing they are all connected in a wireless topology to the base station. I haven’t yet tried the “backhaul via Ethernet” configuration — right now I have a base station and three satellites (the maximum allowed.) Everything appears to be running smoothly.

Here’s a good video review:

https://www.youtube.com/watch?v=56U6DkoxHv8

(I was not paid anything for this endorsement; I simply love the product so far, as it appears to be finally solving a long-running headache.)

RECOMMENDED. The Orbi Ultra-Performance Whole Home Mesh WiFi System.

Update: One Week In

Wow! Super-fast speed and NO problems. So far, so good. Strong recommendation. Very happy that I might finally have found the solution which works.  

Two Weeks In

Not a single restart, nor disconnect, nor satellite “forgetting” its state. I love this product! Strong recommendation.

Updating PLEX Media Server

Here’s how to update Plex Media Server on a server running Ubuntu.

Make sure you replace the URLs and packages with the latest release.

  • Find the URL for the latest Plex Media Server package here.
  • SSH into your server. 
  • Download the latest package (replace filename with the latest), then install it:
wget https://downloads.plex.tv/plex-media-server/0.9.12.4.1192-9a47d21/plexmediaserver_0.9.12.4.1192-9a47d21_amd64.deb
sudo dpkg -i plexmediaserver_0.9.12.4.1192-9a47d21_amd64.deb

Following installation, remove the installer file with this command:

rm plexmediaserver_0.9.12.4.1192-9a47d21_amd64.deb

Remember that you don’t have to type the entire filename, just the first few letters and press <tab> to complete it.