Here’s a quick cheatsheet on setting and reading environment variables across common OS’s and languages.
When developing software, it’s good practice to put anything you don’t wish to be public, as well as anything that’s “production-environment-dependent,” into environment variables. These stay with the local machine. This is especially true if you are ever publishing your code to public repositories like GitHub or Docker Hub.
Good candidates for environment variables are things like database connections, paths to files, etc. Hosting platforms like Azure and AWS also let you easily set the value of variables on production and testing instances.
I switch back and forth between Windows, OSX and even Linux during development. So I wanted a quick cheatsheet on how to do this.
Writing Variables
Mac OSX (zsh)
The default shell for OSX is now zsh, not bash. If you’re still using bash, consider upgrading, and consider using the great utility “Oh My Zsh.”
Open ~/.zshenv in an editor and add export BASEBALL_TEAM="Seattle Mariners" to this file. Be sure to start a new terminal instance for this to take effect, because ~/.zshenv is only read when a new shell instance is created.
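One quick way to do this from the terminal (note the quotes, needed because the value contains a space):
echo 'export BASEBALL_TEAM="Seattle Mariners"' >> ~/.zshenv
# then, in a new terminal:
echo $BASEBALL_TEAM
Seattle Mariners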
bash shell (Linux, older Macs, and even Windows for some users)
export BASEBALL_TEAM="Seattle Mariners"
echo $BASEBALL_TEAM
Seattle Mariners
printenv
< prints all environment variables >
# permanent setting
nano ~/.bashrc
# place export BASEBALL_TEAM="Seattle Mariners" in this file
# then start a new bash shell
Windows
Right click the Windows icon and select System
In the settings window, under related settings, click Advanced System Settings
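From the System Properties dialog, click Environment Variables, then add or edit entries under “User variables” or “System variables.” Alternatively, here’s a quick sketch of doing the same thing from a Command Prompt with setx (the new value only shows up in terminals opened afterwards):
setx BASEBALL_TEAM "Seattle Mariners"
rem open a new Command Prompt, then:
echo %BASEBALL_TEAM%
Seattle Mariners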
Provisioning a PFX key for Azure when you’ve got intermediate certificates involved.
This is a technote for myself for the future and perhaps a note that’ll help people searching for this obscure but important and confusing tech roadblock.
The Problem
I’m working on a web app. It’s hosted at Azure, a platform that grows more and more impressive every day.
As we all know by now, any respectable web app needs a full https trusted connection, which is best for customer privacy, and now appropriately mandated by Google, Facebook and the like.
Now, with plain-vanilla web apps, Azure has a very handy SSL provisioning process which does the grunt-work for you, and gets basic SSL certificates up and running and even custom domain names assigned. I recommend it.
But with my app, I’ve got some specific reasons beyond the scope of this article as to why I needed to provision this certificate manually, which have to do with the fact that it’s an Angular Universal app, running on Node/Express on the back end, with an Azure Application Gateway on the front-end, which doesn’t yet appear to robustly support automatic certificate provisioning.
The point is, to get a fully trusted https (SSL/TLS) connection, you need to install both a main certificate and any intermediate certificates at the server. Or more accurately, you need a single certificate with both the “main” certificate and any “intermediate” certificates.
At this writing, Azure’s Application Gateway service insists that this be a single certificate in PFX format.
Complicating matters further is that vendors like GoDaddy (which is the one I use) issue certificates in .crt and .p7b formats. You don’t get a PFX certificate from GoDaddy. You get a .crt (main) certificate and a .p7b (intermediate) certificate when GoDaddy thinks their work is “done.”
So there you are, needing a .PFX and only having (and having paid hundreds of dollars for) a .crt and a .p7b.
So, off to Google you go. You quickly find that you can convert the .crt to a .pfx, and that does appear to work.
But… and with Certificate Hell, there’s always a gotcha!… if you solely focus on the main certificate that GoDaddy (or others) issue, you’re given the illusion of it working but it really isn’t right, end-to-end.
It will appear to work on, say, Google Chrome, presenting you with the nice shiny lock icon. But then, you head on over to Facebook’s “Linter” (their social media debugger), and it will bark at you that the SSL of your site is invalid. So will more complete tools like SSL Labs Validity Checker.
That’s because it’s missing the Intermediate Certificate. This certificate is essentially telling the Internet that GoDaddy has the rights to issue certificates for domains, and that this “main” certificate has been issued by the real GoDaddy itself.
And with so many steps in this process, the error messages you get don’t always point in the right direction. Because why would you want to do that if your goal is to maximize complexity?
The Goal
In general, you need a certificate which contains the full “chain” of trust, and in my case, it included both the final certificate as well as GoDaddy’s intermediate certificate. This all needs to be bundled into a single PFX file in the end.
That’s what most Azure services want — a final PFX that includes any intermediate certificates in the chain.
Too Much Confusing Jargon
The maddening bit about all this is that the documentation seems written by people from NASA, and uses overly complex lingo and jargon. Even the many helpful people on StackOverflow and the like make all kinds of assumptions about what you know and what you don’t.
So how do you get it all to work?
I spent much of this weekend trying various approaches. So many dead-ends. Such a pain!
But I finally solved it. Here are the steps.
First, buy an “SSL Certificate” from a vendor, such as GoDaddy. At GoDaddy at least, buying an SSL certificate gives you a “slot” to begin the process. You tell it the domain name you want. Better yet, consider buying a wildcard certificate like *.domainname.com, because in many cases you’ll want to secure subdomains as well.
This gives you a spare “SSL Certificate” process to initiate in the “My Products” area. Click on that to get started.
Go to a Windows machine and open up a terminal. Make sure you have openssl downloaded and installed somewhere on your system PATH.
Note to self: DO NOT try this process using the IIS initiation process, documented in a lot of places on the Web. That for me was one of many dead-ends and confusing aspects to the whole thing. You can complete the entire process with openssl at a terminal, plus uploading the CSR to GoDaddy, downloading the certificates, and a bit of conversion. Doing so, you will not need any of the “Add/Remove Snap-in” MMC gobbledygook.
You do not need to touch the remote server until you have the final .PFX file. It is perfectly OK to initiate an SSL certificate signing request on a development machine and upload the result to the final (say, Azure-hosted) server later. A certificate, once issued, can be uploaded to different machines.
Note also that I only had luck on Windows after trying both OSX and Windows, though I know from Stack Overflow that this works on OSX too; in a couple of attempts there I had difficulties, very likely user error.
First, generate yourself a private key:
openssl genrsa -out myserver.key 2048
That’ll create a file called “myserver.key” in your current directory. This is your private key, generated on your machine and known only to you. It says “I am the one creating this request.”
Use openssl to generate a certificate signing request:
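A typical invocation looks like this (a sketch; it assumes the private key generated in the previous step):
openssl req -new -key myserver.key -out myserver.csr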
The prompts during the Certificate Signing Request generation aren’t clear at all. But it’s VERY important what you enter when generating the Certificate Signing Request.
The most important thing is what you enter in the “Common Name” field. This should be your domain name (e.g., domainname.com) or your wildcard (*.domainname.com).
For good measure, I put it as Organization Name as well.
Once it saves a .csr (certificate signing request) file, upload that CSR, including the -----BEGIN CERTIFICATE REQUEST----- and -----END CERTIFICATE REQUEST----- markers, to initiate the request.
After five to ten minutes of verification, you’ll be issued a download payload. Grab it in IIS format.
Included in that file will be a main certificate (marked .crt) and an Intermediate certificate (maddeningly in a different format, .p7b.)
Don’t be scared of .pem and .crt files — they’re just text files. You’re going to open your main .crt file in Notepad++ and APPEND the intermediate certificate from GoDaddy’s repository, saving them both as a file called, say, both.crt.
The Crucial Step: Use Notepad++ to Combine the Certificate Files — Your Main One FOLLOWED by the Intermediate One
You are literally pasting the text of the two certificates, including the begin/end markers, one after the other. Save that file as something like “both.crt” (meaning, both your main certificate and your intermediate certificate.)
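Conceptually, both.crt should end up looking like this (contents elided; your main certificate first, then the intermediate):
-----BEGIN CERTIFICATE-----
(base64 text of your issued main certificate)
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
(base64 text of the GoDaddy intermediate certificate)
-----END CERTIFICATE-----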
Note that you need this file in pure UTF-8 format. Windows often gives you downloaded files in UTF-8-BOM format, which is NOT recognized by the openssl tool; you will get an error if you try to use it. In my experience, this can also hork your requested files, so double-check your work before you continue. You should have a file called “both.crt” which includes the full contents of your issued main “.crt” file, followed by the GoDaddy intermediate certificate above. (Again, that’s only if your SSL cert is issued by GoDaddy and only if it relies upon an intermediate certificate.)
Be sure to use Notepad++ to check the encoding of the text file, and if necessary, switch the encoding type to just UTF-8. It matters. openssl is very finicky, and hates input files in other formats, and your process may fail, requiring you to start all over again.
Once you have everything in a single place, you can get the needed PFX file with this command:
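A typical openssl command for this step looks something like this (a sketch; it assumes your private key is myserver.key and the combined certificate file is both.crt, and it will prompt you to set an export password):
openssl pkcs12 -export -out myserver.pfx -inkey myserver.key -in both.crt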
Finally, you should have a completed .pfx file, which includes not just the final certificate but the full chain of authority. (Facebook and Firefox both require full chains of authority, so if you only include the final certificate, it may work on some browsers but others will find it “untrustworthy”.)
This final .PFX file can then be uploaded to Azure. And voila! Use the Facebook Linter and force a refresh of the page and it’ll be happy again.
I have an old private blog from more than a decade ago that I’m shutting down. It was basically a semi-private journal, related to the construction of our home. It has a lot of useful photographs for me on it — i.e., in-progress construction shots, design inspiration shots, sketches and renderings, plumbing and electrical rough-ins, etc. I wanted to archive them.
I tried a few free tools to download the images. I found various freeware programs as well as free Chrome extensions, but for some reason they all seem to stop at the “thumbnail” image, and did not crawl through to get the full, highest-resolution image.
Enter Scrapy
Nowadays, when there’s a quick “power tool” automated task you’d like to perform, there’s very likely a Python library that’ll help with it. So I was happy to discover the excellent Scrapy library, which is a spider/crawling framework.
There’s also BeautifulSoup, and, in the .NET world, HTMLAgilityPack, which are very good at scraping pages… but Scrapy comes with a full spidering framework, letting you crawl the website to fetch what you want with minimal code.
Using Python 3.7+, first, create a virtual environment, since python library conflicts are a pain in the neck:
virtualenv venv
Activate the virtual environment; to do so, on Windows it’s:
.\venv\scripts\activate.bat
Then, it’s as simple as:
pip install scrapy
Then, simply create a python file in that directory with the code. Here’s mine; I called this file “blogimages.py”. Note that I took some quick-and-dirty shortcuts here because I just wanted this process to work for this specific one-time task. Obviously it can be further generalized:
import scrapy
import urllib.request

class ImageSpider(scrapy.Spider):
    name = 'images'
    start_urls = [
        'http://sample-blog.blogspot.com/',
        'http://sample-blog.blogspot.com/2005/01/',
        'http://sample-blog.blogspot.com/2005/02/'
    ]

    def parse(self, response):
        # find matching image links -- grab all images
        for imgurl in response.xpath("//a[contains(@href,'jpg')]"):
            the_url = imgurl.css("a::attr(href)").extract()[0]
            print(the_url)
            filename = the_url[the_url.rfind("/")+1:]
            print(filename)
            filename = filename.replace("%", "")  # get rid of bad characters for filenames
            filename = filename.replace("+", "")
            print("======")
            urllib.request.urlretrieve(the_url, "images\\" + filename)
            yield {
                'url': imgurl.css("a::attr(href)").extract()[0]
            }
        # next_page = response.css('ul.archive-list li a::attr("href")').get()
        # print("NEXT PAGE ==== ")
        # if next_page is not None:
        #     yield response.follow(next_page, self.parse)
Make a subfolder called “images”, because, as you can see in the hacky code above, it’s going to try to save to that folder.
To run it, all you do is:
scrapy runspider blogimages.py -o images.json
That’s it! The images should now be in your images folder. You can certainly enhance this spider easily to automatically find and crawl the pagination links; I chose not to do that, because, well, I never put pagination links in the old blog — only an “archive” section with a table of contents. I simply fed a list of “start_urls” in the upper portion to traverse.
I’m really enjoying Angular and Azure, but getting them to work well together is sometimes a pain. Right now, there’s some real catch-up that Azure App Services needs to do in order to embrace anything beyond Angular 7.
That’s because Angular 8 and up (“Angular 8+”) requires Node 12.x, and the latest version of Node.js that Azure App Services supports is 10.6.0. As a result, you cannot do a “build” of an Angular 8+ App on Azure.
Angular is moving rapidly, and they’re already in 9.0 pre-release. I’ve recently moved my own projects to Angular 8.
And I’ve sadly discovered I cannot currently use the standard “Deploy from Github” method to deploy Angular apps to Azure, because the build (which takes place on Azure, using this method) will fail, with an error that the version of Node.js is too outdated. And no, you cannot easily install new versions of Node.js on Azure App Services, because one of the design goals of that App Services platform is to have a curated set of known services, not a “free machine in the cloud” that you can install anything and everything to. You can certainly choose different versions of Node, but you cannot choose anything compatible with Angular 8+ at this writing.
That’s a shame, because continuous integration is a really nice feature of Azure. With it, you do a simple check-in and push to GitHub on a designated deploy branch (I usually choose “master” for this purpose), and then Azure fetches the build from GitHub, does the build locally on your App Services instance, and, with the right tweaks to your deployment script, copies it to the right directory. (Such a build-and-deploy custom script is covered in a prior post on this blog.)
Yes, I could try dockerizing everything, but I really don’t want to do that. I prefer not to have docker always running on my dev machines, and for most of my smaller projects, it’s overkill, because I’m just deploying a Single Page Application and one API.
So, Back to FTP
In the meantime, I’m resorting to simple, tried-and-true FTP deployment with a little automation help from Python.
Remember, all Azure really needs is the contents of your “dist” folder, which is built with the command: “ng build --prod”.
I’ve written a simple Python script to do the (Angular 8+) build locally on my dev machine (after testing locally, of course), and then, if the build succeeds, it automatically deploys it via FTP. Since most of my Angular apps are pretty small (and I like to have a single Git repo with the API back-end and Angular front-end), the process completes far faster than the Continuous Integration approach. I wouldn’t recommend this for large-team, large-scale projects, but it works very well for smaller and especially solo projects.
I expect that eventually, the Azure team will upgrade the versions of Node they support to 12+. When that happens, I’ll likely switch back to Continuous Integration deployment.
If you’d like to go the “build locally and FTP the final ‘dist’ site” route, save this file (called deploybuild.py) into your Angular app’s “src” folder:
# this file is called deploybuild.py
import os.path, os
import sys
from ftplib import FTP, error_perm

host = 'azure.host.name.goes.here.azurewebsites.windows.net'
port = 21

ftp = FTP()
ftp.connect(host, port)
ftp.login('username', 'password')
ftp.cwd('/site/wwwroot')

# set filenameCV to your full, absolute path to the "dist" folder
# of the completed build on your local machine when you do
# an "ng build --prod"
filenameCV = "c:\\build\\your-project-folder\\dist"

def placeFiles(ftp, path):
    for name in os.listdir(path):
        localpath = os.path.join(path, name)
        if os.path.isfile(localpath):
            print("STOR", name, localpath)
            with open(localpath, 'rb') as f:
                ftp.storbinary('STOR ' + name, f)
        elif os.path.isdir(localpath):
            print("MKD", name)
            try:
                ftp.mkd(name)
            # ignore "directory already exists"
            except error_perm as e:
                if not e.args[0].startswith('550'):
                    raise
            print("CWD", name)
            ftp.cwd(name)
            placeFiles(ftp, localpath)
            print("CWD", "..")
            ftp.cwd("..")

# first, do the build locally
buildResult = os.system("ng build --prod")

# if there is an error (non-zero exit code), stop
if buildResult != 0:
    print("ERROR IN BUILD. NOT DEPLOYING.")
    ftp.quit()
    sys.exit(1)

placeFiles(ftp, filenameCV)
ftp.quit()
To run this script, once you’re ready to deploy your local build, you just issue:
python deploybuild.py
Note that you will get a “cannot rmdir <directory name>” error during the build if you happen to have the project open in another window (say, a manual FTP client.) That’s because “ng build --prod” first wants to clear out the dist directory, and it cannot if you have the folder open in another application.
Immediately upon launch of the candidate-facing preview of ALIGNVOTE last Wednesday, I heard a great feature request from D4 candidate Heidi Stuber. Paraphrasing our exchange:
“I understand why multiple choice is great for finding a match, but often, multiple choice questions have a need for explanation as to why a particular stance was selected. Please allow me to elaborate on an answer.”
A few days later, I heard that request from two other candidates.
I told her by email that I thought her suggestion was a terrific and important feature, and that I’d implement it as quickly as I could, but that for various reasons I wanted to get the beta up and going. But no question, she’s right: some multiple choice answers can lose a great deal of nuance, and it’s important to offer candidates a chance to elaborate upon why they selected the option they did. Today, the day after the voter-facing public beta was announced, I’m pleased to report that this feature has been implemented on both the candidate-facing view and the voter-facing view. Candidates, now it’s your turn.
I’ve lamented before in a different context that nuance is in hibernation in America, and I definitely don’t like that trend. It’s happening on both the far right and the far left. Complex issues rarely have simple explainers or options. And I’m happy to do what I can to make room for nuance in a very tiny but important way.
One dilemma I thought about was how and when to present it to the voter. I also wanted to do it in such a way that it didn’t “push” the voter too much before they have had a chance to think and take their own stand. Sometimes, allowing a shift in our preconceived notion is an important part of politics and moving forward, and personally, I think it’s good to encourage pauses for reflection, and make space for moments where it can happen.
It’s important to note that by itself, the text of what’s written is not going to improve a match score, because ALIGNVOTE cannot read the voter’s mind and know whether that’s what they think too. It only sees which ultimate option is chosen and whether it matches the stance the candidate has, and applies the relative weighting of that issue to figure out how “closely aligned” on the questions provided they are. But reading this explanatory text on a position might be very informative to you about which option you should choose, and give you, the voter, a chance to compare brief and to-the-point rationales.
So here’s the way it works:
First, the Candidate Gets an Easy Way to Optionally, Briefly Elaborate
ALIGNVOTE sent all campaigns an email earlier today with a link that lets them elaborate on any or all answers. Every single campaign got emailed this link, based upon the officially registered email address on file with the City of Seattle. If a campaign didn’t receive it, they need to double-check SPAM folders or re-request it from us. (I cannot just send it to any old email address, for what I hope are understandable reasons.)
ALIGNVOTE indicates clearly that elaborating is optional, and the text of what’s written will not change the match-scoring, but it might encourage a voter to align with their stance on the issue, thus encouraging a greater match score and resultant ranking.
Elaborations are only allowed to those candidates who confirm their stances via the ALIGNVOTE surveys sent.
Personal note: Candidates, take a stand and choose an option; that’s a big part of what we’re electing you to do. One of the things ALIGNVOTE fights against is attempting to have one’s cake and eating it too. Candidates can update their elaboration at any time via their link, but stances are locked in unless they email us.
In the ALIGNVOTE Interview, the Voter Sees the Question and Ponders their Stance…
Let’s say they first choose “Might be a good idea.”
Candidate Voices are Revealed, if Present
There’s a “Candidate Voices” section that’s revealed, with a shuffled list of responses from candidates. ALL available candidate voices on the issue (not just the selected stance) are provided. There is no selection or filtering. If a candidate submits a sentence or two via their ALIGNVOTE survey, it is displayed. ALL candidate views are displayed; these are in no way filtered or curated or altered or hidden by ALIGNVOTE. You, the voter, see what the candidate wrote:
You, the voter are free to revise your choice or weighting based on the candidate’s official voice on the issue.
ALIGNVOTE shuffles the display order of candidate voices when one or more is present, so that one candidate doesn’t always get top billing or the last word. (At this writing, this is the first use of randomization in ALIGNVOTE. Known bug: Currently doesn’t shuffle until there are three or more elaborations in the list for a given issue. Addressing soon.)
If no candidate in the race offers an elaboration, no “Candidate Voices” subsection is displayed.
The elaborations are a maximum of 240 characters each, the current maximum length of a Tweet.
The elaborations can be updated at any time by the candidate if they so choose. This policy may change based upon logistical ease, but for now, that’s the case — it can be changed as issues or news warrant.
Only those candidates who have actually completed and submitted the ALIGNVOTE stance survey (officially confirming their stances) will have their statements displayed to voters.
For Candidates, It’s a Great Way to Get Your Message to Voters
High-propensity voters will be using this tool. Candidates shouldn’t miss the opportunity to get their stances and messages to voters who are pondering their own views on an issue. We strongly encourage all candidates to complete their ALIGNVOTE survey, and suggest that they use the new elaborations feature to reach voters with a succinct description of their rationale for their stances.
Please give busy candidates and campaigns the benefit of the doubt by noting that this is a brand new feature, only hours old. Only hours ago did all candidates receive the ability to actually enter their stances, so it understandably may take time for them to craft the right 240 character stances. That’s reasonable.
To see the feature in action, visit the D4 Race, as the candidate who suggested this feature has already provided her comments.
Other Updates Released Today
We are still in beta and will be for a while.
Explanatory text on the weighting slider
Privacy Policy
Answers to a few questions
No changes to the match scoring algorithm have been made since we went live. None. Zero. If you’re seeing better rankings (and I guess that actually translates to: “more in line with what I thought it would be”), as at least one person has mentioned on Twitter, it might be because candidates have taken a moment to confirm/update their stances, which just makes everything better and helps us be more informed about where they stand. (Thanks, candidates!)
At present, the only place where randomization is used in ALIGNVOTE is the shuffling of the order of candidate elaborations.
Traveling Soon, Slow Response Very Likely Late June Until Early July
I do have some long-planned personal commitments and travel that will likely keep me away from the computer for a couple weeks at the end of June. This was planned well before the idea for ALIGNVOTE even began. I wish I could stay here, frankly. My other commitments and travel will cause some interruption and slow response during this period; I just want to be up front about that. There will be a very busy July through early August, for sure, and I’ll very much be around for that.
At this writing, 1,600+ candidate score matches have been done by the platform. No question, many of these are dupes by the same person kicking the tires and checking it out. But it’s also good to remember that in our last District Level Election (2015), some races were decided by mere dozens of votes.
Thanks for the great and constructive feedback and I hope this tool is useful in narrowing down a few candidates with whom to connect.
Update: AGREES/DISAGREES Indicator
In taping a segment to be aired on TV next week, a reporter noted to me that he got confused about the display of the “Candidate Voices” section. Though there is clear hover text over the info-button, he was under the impression that only the candidate voices who agreed with the stance the voter had selected would be shown.
That’s incorrect — ALL candidates who have provided elaborations are shown, regardless of the answer provided. Just like a live candidate forum, we do no filtering of what they say. It’s displayed immediately for all candidates once they submit their questionnaire. They have the ability to edit or change those elaborations at any time.
In addition, we are now making it clear whether the candidate who provided an elaboration AGREES or DISAGREES with the stance the voter has tentatively selected, as follows:
This is only done where an elaboration is provided. For now, in this beta, this is by design. That’s because, just like a forum, we want voters to hear more than just a yes or no; we’d like candidates to offer a short statement of support for why they chose the option they chose.
The AGREES/DISAGREES indicator simply tells the voter at a glance whether the candidate agrees with what the voter is choosing or not. And it’s available for all candidates who provide elaboration text.
The benefit for candidates of course is that they can get the justification out there to voters for WHY they feel the way they do about an issue. The benefit for voters is that they get to read it, and can see at a glance whether the candidate agrees with their stance or not.
Have you ever been to an anniversary or birthday celebration which included video well-wishes from friends and family? Or, have you ever wanted to collect a series of video testimonials from customers?
If you’ve ever tried to gather a bunch of videos from people, you know it’s not easy. It’s a hassle to nudge people, it’s a hassle for them to record a response, upload it somewhere, send you a link. You invariably get it in all kinds of different formats and locations. And nowhere is the information easily sortable, searchable, taggable or organized.
I wanted to do something about that. I’ve launched a new, free tool called popsee which allows you to gather videos easily, from anyone with either a webcam (desktop or laptop) or an iPhone/iPad.
How It Works
Popsee is now in alpha, and only supports one use-case (the townhall described below.) But the basic steps are:
Curator gets a coded weblink which they can send anywhere
End user following that link can easily respond via webcam and any browser, or an iPhone/iPad. (There’s no Android app yet.)
popsee does basic validation for you — on things like video length, etc. End-users can re-record clips as many times as they’d like before uploading.
As videos roll in, curator gets a handy dashboard to manage and sort them. Curator can download movies in standard movie formats and edit as they wish.
Uses
Birthdays, weddings, anniversaries and celebrations
Conferences
Townhall style forums
Product Testimonials
Auditions
…more
I wanted an easy way for any curator to gather and organize videos from a group of people.
Origin Story
A citizen group I’m part of, SPEAK OUT Seattle!, is organizing a series of townhall-style candidate debates for an upcoming city election. As part of this townhall series, I volunteered to film a series of questions from citizens from around Seattle to be projected on the big screen.
When I started to think about the effort involved in driving around Seattle to collect about 80 videos, it dawned on me just how many people have webcams and good-quality smartphones, and that this technology can really help with the sourcing or “audition” process.
Most important, I wanted the tool to be easy. I wanted it to also include simple “metadata” that the curator wanted; in this case, the question in written form, and contact information.
I was surprised at the lack of tools to allow a curator to initiate a video request from a group of people via, say, a specially-coded weblink (like a shortened URL.)
Sure, you can write an email or do a Facebook post and ask people to record a video and upload it to YouTube and send the link, or maybe put a bunch of videos in Dropbox, but I wanted something point-and-click simple, and I wanted it to optionally include simple survey questions based upon what the curator wants. And when old-style videos do arrive, I wanted them to arrive in searchable format, with “metadata” such as their contact information, email, or perhaps what the subject is. Over time, I’ll be looking at automatic transcription tools, search and indexing tools, word clouds and more. I wanted a platform where a survey-initiator can build a simple survey, with one or more of these questions being submitted by video.
Meaning: for now, it’s being used just for the SPEAK OUT Seattle event.
The free iOS app is in review by Apple and should be available in the next two weeks. This app currently just lets you respond to popsee requests; I expect it will allow you to initiate them some time later this year.
I’ll be building out a great dashboard for the curators, which will include the ability to kick off new requests. If you’d like to try it out, follow and send a DM to @popseea on Twitter.
Using Git to manage your code? We all know about git init, commit, etc. But here are some handy git commands for common scenarios:
Clone from GitHub
git clone {github url}
Unlink your Local Repo from GitHub
Ever want to use an example app on GitHub as a starting point for your application or component, and want to unlink it from the original GitHub address?
This can be done in two steps:
git remote rm origin
This will delete the config settings from .git/config.
Then, issue
rm .git/FETCH_HEAD
to get rid of the FETCH_HEAD, which still points to GitHub.
Help with the Most Common Git Commands
git help everyday
Search your Change Log
Where was that checkin that I did which altered the “foo” class?
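A couple of handy ways to do that (sketches; replace foo with the term you’re looking for):
git log -S"foo" --oneline
# the "pickaxe" search: commits whose diffs added or removed the string foo
git log --all --grep="foo"
# commits whose commit message mentions foo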
The process of deploying an Angular 6+ application to Azure is pretty Byzantine, but here are the basic steps to get it going.
I really like Microsoft Azure, but the process of deploying an Angular 6+ app is complex, and filled with some very big hidden gotchas. As of this writing, you cannot just spin up a web app instance, push Angular code to it from a repository and have it run; you need to take several steps first.
Here are the essential steps. Note that another and in many cases better approach is to use containers, such as Docker, but I’m not going to cover that here. In this post, I’ll walk through running your Angular app directly on an Azure web app instance.
Before you begin
Make sure your production build is working on your local machine:
ng build --prod
This command should create a dist folder right off the root of your Angular project. If it doesn’t build on your machine, it’s not going to build once it’s deployed to Azure. Normally in development you’re using:
ng serve
but this only runs Angular on the development server, which is not suitable for production environments.
Jot down a couple things from your local build machine. Let’s find out which version of Node you are running (and npm version), because you’ll want to replicate it on Azure. On your local machine, open up a command shell, and type:
node -v
In my case, I get: v10.6.0 at this writing. So I’ll be remembering “10.6.0” as a value when I tell Azure what version of Node I want it to use in the “Application Settings” step below.
Make these modifications to your local Angular app:
Azure’s going to need to know how to build your app. To do this, open up package.json and copy a few “devDependencies” into the main root “dependencies” section. Here are the ones that were important to my project’s successful on-Azure build:
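For a typical Angular CLI project, the entries that need to move usually look something like this (an illustrative sketch only; the package versions are placeholders and your project’s exact list will differ):
"dependencies": {
  ...
  "@angular-devkit/build-angular": "^0.13.0",
  "@angular/cli": "^7.3.0",
  "@angular/compiler-cli": "^7.3.0",
  "typescript": "~3.2.4"
}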
While still in package.json, look at the “scripts” section toward the top of the file. Make sure you have an entry for “build”: “ng build --prod”. You’ll see in the deploy.cmd file below that we rely upon that command to do the actual build on Azure. You do not commit the dist folder to GitHub; rather, you commit the source code, and Azure first copies over the GitHub code, then it runs an ng build --prod, and it copies the results of the dist folder to the Azure production directory that you define in Application Settings.
Note too that the “start” command was removed, which seems to confuse Azure; you don’t want Azure trying to “ng serve” your application, because that would try to invoke the development server, which is not suitable for production.
Let’s Create an Azure App
We need a Web App instance to work with. We will then configure the Application Settings to tell it which version of Node to run, what kind of deployment script to run after it sucks in all those files from GitHub when you publish a commit, and we will tell Azure where to look for the app directory.
Using the Azure Portal, create a web app.
Once that’s deployed, go to the Azure Application Settings for that web app, and update two key variables:
App setting name: “WEBSITE_NODE_DEFAULT_VERSION”. Set this value to indicate what version of Node Azure should spin up for you. I like to match my local dev machine exactly (nope, folks, I’m not yet using Docker for small deployments.) You can find this value on your local machine by opening up a command prompt and typing “node -v”. For me, at this writing, it’s 10.6.0, so that’s what I set the value of this variable to in Azure.
You also need to tell Azure where to find the code to run, and it’s not going to save in the default directory — it’s going to save in the default directory appended with your project name. At the very bottom of the Application Settings area, you’ll see a section called “Virtual applications and directories”. You need to tell Azure where to go for the main virtual path of your site. It will generally be site\wwwroot\{your-angular-project-name} That is, that’s what it will be if you follow the custom DEPLOY.CMD command below.
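For example (assuming your Angular project folder in the repo is named my-angular-app, a placeholder name), the mapping would look roughly like this:
Virtual path: /
Physical path: site\wwwroot\my-angular-app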
Next, you’ll need to teach Azure’s web service how to build and deploy your app, once it pulls in all that code from GitHub.
To do this, you create two files — one called “.deployment” and the other called “deploy.cmd.”
In root level of your Angular project, you’ll need to create a custom deploy.cmd file which builds the “dist” folder and uses that as the main production code.
To do this, we want to use a custom deploy.cmd, not just the standard Azure one. It does a production build, and then uses the results of the standard Angular dist folder as the main website code.
Here’s what the files look like. The first file is named .deployment (note the period before the word), and it belongs in the root of your Angular project. It simply looks like this:
[config]
command = deploy.cmd
The second file is the actual deployment script, called deploy.cmd, and it looks like this below. Note in particular the KuduSync step near the end, which copies from the dist folder after completing that production build.
Note that the file below is not the plain vanilla deploy.cmd — it’s been slightly modified.
@if "%SCM_TRACE_LEVEL%" NEQ "4" @echo off

:: ----------------------
:: KUDU Deployment Script
:: Version: 1.0.17
:: ----------------------

:: Prerequisites
:: -------------

:: Verify node.js installed
where node 2>nul >nul
IF %ERRORLEVEL% NEQ 0 (
echo Missing node.js executable, please install node.js, if already installed make sure it can be reached from current environment.
goto error
)
:: Setup
:: -----
setlocal enabledelayedexpansion
SET ARTIFACTS=%~dp0%..\artifacts
IF NOT DEFINED DEPLOYMENT_SOURCE (
SET DEPLOYMENT_SOURCE=%~dp0%.
)
IF NOT DEFINED DEPLOYMENT_TARGET (
SET DEPLOYMENT_TARGET=%ARTIFACTS%\wwwroot
)
IF NOT DEFINED NEXT_MANIFEST_PATH (
SET NEXT_MANIFEST_PATH=%ARTIFACTS%\manifest
IF NOT DEFINED PREVIOUS_MANIFEST_PATH (
SET PREVIOUS_MANIFEST_PATH=%ARTIFACTS%\manifest
)
)
IF NOT DEFINED KUDU_SYNC_CMD (
:: Install kudu sync
echo Installing Kudu Sync
call npm install kudusync -g --silent
IF !ERRORLEVEL! NEQ 0 goto error
:: Locally just running "kuduSync" would also work
SET KUDU_SYNC_CMD=%appdata%\npm\kuduSync.cmd
)
goto Deployment
:: Utility Functions
:: -----------------
:SelectNodeVersion
IF DEFINED KUDU_SELECT_NODE_VERSION_CMD (
:: The following are done only on Windows Azure Websites environment
call %KUDU_SELECT_NODE_VERSION_CMD% "%DEPLOYMENT_SOURCE%" "%DEPLOYMENT_TARGET%" "%DEPLOYMENT_TEMP%"
IF !ERRORLEVEL! NEQ 0 goto error
IF EXIST "%DEPLOYMENT_TEMP%\__nodeVersion.tmp" (
SET /p NODE_EXE=<"%DEPLOYMENT_TEMP%\__nodeVersion.tmp"
IF !ERRORLEVEL! NEQ 0 goto error
)
IF EXIST "%DEPLOYMENT_TEMP%\__npmVersion.tmp" (
SET /p NPM_JS_PATH=<"%DEPLOYMENT_TEMP%\__npmVersion.tmp"
IF !ERRORLEVEL! NEQ 0 goto error
)

IF NOT DEFINED NODE_EXE (
SET NODE_EXE=node
)

SET NPM_CMD="!NODE_EXE!" "!NPM_JS_PATH!"
) ELSE (
SET NPM_CMD=npm
SET NODE_EXE=node
)

goto :EOF

::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
:: Deployment
:: ----------

:Deployment
echo Handling node.js deployment.

:: 2. Select node version
call :SelectNodeVersion

:: 3. Install npm packages
IF EXIST "%DEPLOYMENT_SOURCE%\package.json" (
pushd "%DEPLOYMENT_SOURCE%"
call :ExecuteCmd !NPM_CMD! install --production
IF !ERRORLEVEL! NEQ 0 goto error
popd
)

:: 3. Angular Prod Build
echo Building App next...
echo DEPLOYMENT_SOURCE is %DEPLOYMENT_SOURCE%
IF EXIST "%DEPLOYMENT_SOURCE%/angular.json" (
echo Building App in %DEPLOYMENT_SOURCE%...
pushd "%DEPLOYMENT_SOURCE%"
call :ExecuteCmd !NPM_CMD! run build
:: If the above command fails comment above and uncomment below one
:: call ./node_modules/.bin/ng build --prod
IF !ERRORLEVEL! NEQ 0 goto error
popd
)

:: 1. KuduSync
IF /I "%IN_PLACE_DEPLOYMENT%" NEQ "1" (
call :ExecuteCmd "%KUDU_SYNC_CMD%" -v 50 -f "%DEPLOYMENT_SOURCE%/dist" -t "%DEPLOYMENT_TARGET%" -n "%NEXT_MANIFEST_PATH%" -p "%PREVIOUS_MANIFEST_PATH%" -i ".git;.hg;.deployment;deploy.cmd"
IF !ERRORLEVEL! NEQ 0 goto error
)

::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
goto end

:: Execute command routine that will echo out when error
:ExecuteCmd
setlocal
set _CMD_=%*
call %_CMD_%
if "%ERRORLEVEL%" NEQ "0" echo Failed exitCode=%ERRORLEVEL%, command=%_CMD_%
exit /b %ERRORLEVEL%

:error
endlocal
echo An error has occurred during web site deployment.
call :exitSetErrorLevel
call :exitFromFunction 2>nul
:exitSetErrorLevel
exit /b 1
:exitFromFunction
()
:end
endlocal
echo Finished successfully.
Key Gotcha: both .deployment and deploy.cmd need to go in the ROOT of your GitHub Repository. (e.g., if you’ve got an Angular app project that’s a subfolder of your GitHub repo, still make sure you put those files in the ROOT of your GitHub repo, not the root of the Angular sub project.)
The last steps are fairly straightforward:
Get your Angular App hosted in a GitHub repository. There are several articles about this, here’s one.
Tell Azure to deploy the app from GitHub (article.) Essentially, you go to the Azure Portal, choose the web app, go to “Deployment” and set up “Deploy from GitHub.” It’s pretty straightforward if you already have your code in GitHub.
That’s it! Once you do those steps, you should then be able to get code working locally, commit it to GitHub, wait for a few minutes as Azure fetches the code from GitHub and builds it, and then serves it up on request to users visiting your website. Note again that it’s doing a production build, which applies a higher level of code quality checking than the debug builds, so if your build suddenly starts failing on Azure, it’s probably not Azure’s fault; it’s probably because your production build doesn’t build locally. Best to do an ng build --prod before pushing to GitHub.
Of course, this is not a “best practice” for production builds — you should be deploying to one or more staging slots, running your test suites on the staging site, and flipping the slots into production, among other things, but that’s not in the scope of this post.
Routing
If you’re hosted on IIS, you’ll also want a web.config file like the one below shipped up to your Azure host so that Angular can take over the routing.
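A commonly used web.config for this purpose looks something like the following sketch (it relies on the IIS URL Rewrite module; adjust to your app):
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <system.webServer>
    <rewrite>
      <rules>
        <!-- Send any request that isn't a real file or directory to index.html, so the Angular router handles it -->
        <rule name="Angular Routes" stopProcessing="true">
          <match url=".*" />
          <conditions logicalGrouping="MatchAll">
            <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" />
            <add input="{REQUEST_FILENAME}" matchType="IsDirectory" negate="true" />
          </conditions>
          <action type="Rewrite" url="/index.html" />
        </rule>
      </rules>
    </rewrite>
  </system.webServer>
</configuration>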
Troubleshooting Tips
In the “Deployment” blade for your app on Azure, you should see it fetching code from GitHub when you do a check-in to the configured branch. If it doesn’t, you might try detaching the connection and re-attaching the GitHub deployment.
If you see it building and then giving a “Failed” message, check the Logs link and scroll through the output. If it says it cannot find any files in the d:\…\dist directory, it means the build did not succeed on Azure. The log file, as cluttered as it sometimes can be, should tell you about the component (or components) that are missing. Add those to the “dependencies” section of package.json and check in a new build. Sometimes the first deployment is a bit of an iteration dance, because it will usually fail after the first one and not report all the missing libraries. Think of Azure as basically starting from scratch in the build process — it starts with a clean working directory each time it pulls builds from GitHub, then runs an ng build --prod to build the dist folder.
Steve’s a Seattle-based entrepreneur and software leader, husband and father of three. He’s American-Canadian, and east-coast born and raised. Steve has made the Pacific Northwest his home since 1991, when he moved here to work for Microsoft. He’s started and sold multiple Internet companies. Politically independent, he writes on occasion about city politics and national issues, and created voter-candidate matchmaker Alignvote in the 2019 election cycle. He holds a BS in Applied Math (Computer Science) and Business from Carnegie Mellon University, a Masters in Computer Science from Stanford University in Symbolic and Heuristic Computation, and an MBA from the Harvard Business School, where he graduated a George F. Baker Scholar. Steve volunteers when time allows with Habitat for Humanity, University District Food Bank, Technology Access Foundation (TAF) and other organizations in Seattle.