Category: How To


Using Zapier and OmniFocus to stay on top of meetings

(If you like this post, you might also like Using Zapier to import GitHub issues into OmniFocus.)

I use OmniFocus for personal task-tracking and Zapier to automate work when possible.

One way I use these two tools in combination is to automatically create tasks when a new event is created on my work calendars.

The two tasks look like this:

  • Meeting Prep: Call with Robert Smith, with a due date of the day before the meeting is scheduled.
  • Call with Robert Smith, with a due date of the day the meeting is scheduled.

In OmniFocus this looks something like:

The reason I create two tasks in OmniFocus is that I use a few AppleScripts to generate daily and weekly task reports and forecasts, and I like those reports to include a record of my meetings as well as the prep work I do for them.

What you need to set up this integration:

Optional tools include Hazel or Lingon to automate running the ParseInbox AppleScript.

Before I continue, thank you to Joe Buhling for sharing his collection of OmniFocus scripts!

Part 1: Create Zap

Step 1: Google Calendar app, New Event trigger

First, select the Google Calendar app and the New Event trigger:

Next you’ll need to configure the connection to your Google account (if you haven’t already):

Then you’ll select the specific calendar from which to retrieve new events:

That’s it for this step. Don’t forget to test the step before moving on to be sure you can retrieve an event.

Step 2: Formatter app, Date/Time action

In this step, we’ll take the start date of the meeting and subtract one day to get the due date for our Meeting Prep task.

Select the Formatter by Zapier app and the Date/Time action:

Next, you’ll set the following fields:

  • Transform: Add/Subtract Time
  • Input: Step 1 Event Begins (Pretty)
  • Expression: -1 day
  • To Format: Use a Custom Value (advanced)
  • Custom Value for To Format: MMM D, YYYY

(The script I’m using to parse OmniFocus Inbox items doesn’t handle date formats that include times well, which is why in this step we’re also formatting the date as MMM D, YYYY.)


Continue and be sure to test the step before moving on to the next one. The output should look something like:

Jun 8, 2017
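
If you’d like to sanity-check this transform outside of Zapier, here’s a rough Python equivalent of what the Formatter step does (just a sketch; the input string below is made up, and Zapier handles the parsing for you):

from datetime import datetime, timedelta

# A made-up "pretty" start time, roughly like what Google Calendar provides.
event_begins = "Jun 9, 2017 10:00AM"
start = datetime.strptime(event_begins, "%b %d, %Y %I:%M%p")

# Subtract one day and format as "MMM D, YYYY", e.g. "Jun 8, 2017".
# (%-d is the non-zero-padded day on macOS/Linux.)
prep_due = (start - timedelta(days=1)).strftime("%b %-d, %Y")
print(prep_due)  # Jun 8, 2017

Step 3 below is the same idea, minus the one-day subtraction.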

Step 3: Formatter app, Date/Time action

This step is similar to Step 2 except that we aren’t going to modify the date, just format it so it works with the script we’ll use to parse our OmniFocus Inbox.

Select the Formatter by Zapier app and the Date/Time action as before. This time you’ll select Format as the Transform value:

  • Transform: Format
  • Input: Step 1 Event Begins (Pretty)
  • To Format: Use a Custom Value (advanced)
  • Custom Value for To Format: MMM D, YYYY

Continue and be sure to test the step before moving on to the next one. The output should look something like:

Jun 8, 2017

Step 4: OmniFocus app, Create Task action

In this step we’ll create the first of our two tasks, this one for Meeting Prep.

First, select the OmniFocus app:

Select the OmniFocus app for Step 4.

Now select the Create Task action for the OmniFocus app:

Select the Create Task action for the OmniFocus app.

Next you’ll need to connect your OmniFocus account if you haven’t already and select which connection you’d like to use.

Next, set up the Create Task action. You’ll configure only the Title field, as follows:

--Meeting Prep: Step 1 Summary @Meeting Prep ::Project name #Step 2 Start Datetime Pretty //Step 1 HTML Link

Let’s break this down:

  • The -- sets the name of the task.
  • The @ sets the context.
  • The :: sets the name of the project.
  • The # sets the due date.
  • The // sets the text of the note.
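
Put together, for a hypothetical call with Robert Smith filed under a project named Client Onboarding (both names are made up for illustration), the task that lands in your OmniFocus Inbox would read something like:

--Meeting Prep: Call with Robert Smith @Meeting Prep ::Client Onboarding #Jun 8, 2017 //(link to the calendar event)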

Two notes:

  • The name of the project is fuzzy-matched against the flattened names of folders and projects, so you don’t need to use a colon between the folder and project name.
  • With the AppleScript I’m using to parse OmniFocus’ Inbox, I had trouble with dates that include times, which is why I simplify the due date format in Steps 2 and 3 of the Zap.

For details on the syntax used for parsing the inbox, see this post.

Here’s what the task looks like in Zapier:

As always, test the action to make sure everything looks right before continuing on. Your test output should look something like this:

Step 5: OmniFocus app, Create Task action

In this step we’ll create the second of our two tasks, this one for the meeting itself.

As in Step 4, select the OmniFocus app and Create Task action.

The Title field for this Create Task action is slightly different, as is the due date:

--Step 1 Summary @Meetings ::Project name #Step 3 Start Datetime Pretty //Step 1 HTML Link
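
Continuing the hypothetical example from Step 4, the expanded title would read something like:

--Call with Robert Smith @Meetings ::Client Onboarding #Jun 9, 2017 //(link to the calendar event)

Note that this task uses the Meetings context and the meeting-day due date from Step 3 rather than the day-before date from Step 2.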

Here’s what it looks like in Zapier:

Next, test the action to make sure everything looks right before continuing on. Your output should look something like this:

Part 2: Parsing tasks in OmniFocus’ Inbox

Step 1: Manually run the ParseInbox script

For this part, if you haven’t already, you’ll want to grab a copy of the AutoParser scripts from either the original author or myself.

The repositories linked above contain a collection of AppleScripts for use with OmniFocus. (Thank you, Joe Buhling, for putting these together!)

There are two main options for running the script manually.

Option 1: You can run any of the scripts from the command line with the osascript command:

/usr/bin/osascript "/Users/christie/Bin/OFScripts/Auto-Parser/ParseInbox.applescript"

Option 2: If you don’t want to use the command line to run scripts, you can copy ParseInbox.applescript into OmniFocus’ scripts folder. To find out where this is, go to Help > Open Scripts Folder in OmniFocus and it will open a new Finder window at that location. Once you do this, you’ll see Script: ParseInbox as an option in the View > Customize Toolbar… window. Drag this icon to your toolbar for ease of use.

When you run the ParseInbox script, it will transform the Inbox task Zapier created that looks like this:

--install certbot @GitHub ::sustainbility index project kick off #06/08/17 //https://github.com/numfocus/collab-infrastructure/issues/30

Into the task install certbot, belonging to the project Project Kick Off in the folder Sustainability Index. The task will now have a due date of 6/8/2017, and note text that includes a link back to the original GitHub issue:

Task in OmniFocus after it has been parsed from the Inbox.

If at this point you realize that your Zap isn’t quite configured correctly or exactly how you want it, you can go back and adjust it. And, if you get tired of waiting for OmniFocus to sync with the server to retrieve the new task, just remember you can copy and paste the test output from Step 4 of your Zap.

Step 2 (optional): Automatically running ParseInbox

This step is totally optional and you can skip it if you’re happy manually running the script when you want to parse Inbox items.

However, if you don’t want to have to remember to do this, or if you want OmniFocus to be able to process Inbox items while you’re out and about, then you’ll want to automate it.

There are a few options for doing this. They all require your computer be on, but OmniFocus doesn’t have to be open (the script will open it if closed).

Option 1 is to use Hazel to run the script when your OmniFocus database has been updated. Joe explains how to configure this option on his blog here. I had mixed results with this method; the script seemed to run sometimes and not others. YMMV.

Option 2 is to schedule the script using launchd (macOS’s version of cron). This involves editing plist files, which I hate doing, so I bought Lingon X to make this easy.

Here’s what my settings for Lingon look like:

Lingon settings for scheduling the ParseInbox script.

And the plist generated by Lingon looks like this:


<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
	<key>EnvironmentVariables</key>
	<dict>
		<key>PATH</key>
		<string>/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/MacGPG2/bin:/usr/local/sbin</string>
	</dict>
	<key>Label</key>
	<string>of.autoparser</string>
	<key>ProgramArguments</key>
	<array>
		<string>/usr/bin/osascript</string>
		<string>/Users/christie/Bin/OFScripts/Auto-Parser/ParseInbox.applescript</string>
	</array>
	<key>StartInterval</key>
	<integer>300</integer>
</dict>
</plist>

Using Zapier to import GitHub issues into OmniFocus

I use OmniFocus for personal task-tracking and Zapier to automate work when possible.  I also coordinate open source work on GitHub. Recently I was wondering if there was a way to make OmniFocus automatically import any GitHub issue assigned to me. It turns out there is!

What you need:

Optional tools include Hazel or Lingon to automate running the ParseInbox AppleScript.

Before I continue, thank you to Joe Buhling for sharing his collection of OmniFocus scripts!

Part One: Create zap on Zapier

If you haven’t already, enable OmniSync and create a Mail Drop email address. You’ll also need a GitHub account and there should be a newly created issue assigned to you.

To get started, while logged into Zapier, click the MAKE A ZAP! button.

Step 1: GitHub app, new issue trigger

Select GitHub as the trigger app:

Select “GitHub” as the trigger app.

Next, select New Issue as the GitHub trigger:

Select “New Issue” as the GitHub trigger.

Next, set up the GitHub new issue trigger according to your preferences:

Set up the GitHub issue trigger according to your preferences.

In my Zap, I’ve selected Only issues assigned to you and for the time being, I’ve limited it to a single GitHub organization. What you select is up to you.

Next, test this step to ensure it is retrieving the data from GitHub that you expect it to. Before you run the test, Zapier will remind you to have a recently created issue that matches your trigger options:

Test your GitHub new issue trigger.

Once everything looks good, save the step.

Next, you’ll create a formatter step to format any dates attached to GitHub issues via their assigned milestones.

Step 2: Formatter app, date/time action

The app you’ll select for this second step is Formatter by Zapier:

The Step 2 app is Formatter by Zapier.

The action you’ll use for Formatter is Date / Time:

Use the Date / Time action for the Formatter app.

Next, set up the Date / Time action. You’ll want to set the following fields accordingly:

  • Transform: Format
  • Input: Step 1 Milestone Due On
  • To Format: MM/DD/YY

Configure the date transform action.

Now test the action to ensure the result is as you expect. You’ll see something like:

Test results for the date formatter action.

Once everything looks good, save the step.
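
If you’re curious what this step is doing, here’s a rough Python equivalent (a sketch only; the milestone date below is made up, and the Formatter handles the parsing for you):

from datetime import datetime

# A made-up milestone due date, roughly like the ISO 8601 timestamps GitHub provides.
milestone_due_on = "2017-06-08T07:00:00Z"
due = datetime.strptime(milestone_due_on, "%Y-%m-%dT%H:%M:%SZ")

# Re-format as MM/DD/YY for the ParseInbox script, e.g. "06/08/17".
print(due.strftime("%m/%d/%y"))  # 06/08/17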

Step 3: Code by Zapier app, run Python action

In this step, I’m using Python code to map GitHub repository names to OmniFocus folders and GitHub milestones to OmniFocus projects within those folders. If you have a different organizational scheme, you’ll want to modify the code in this step accordingly.

First, select the Code by Zapier app:

For Step 3, select the Code by Zapier app.

Next, select Run Python as the Code by Zapier action:

Select Run Python as the Code by Zapier action.

(You could also select Run Javascript and re-write the Python code below in Javascript.)

Next, configure the Input Data for use with our custom Python code. You’ll want to set the following:

  • repo: Step 1: Repository Name
  • milestone: Step 1: Milestone Title

The names of the fields on the left don’t really matter, but they must match the key names we’ll use in our Python code.

Configure input data for the custom Python code.

Next, you’ll enter the following Python code into the Code field:

# want to set the project as
# repo milestone
# or just repo if no milestone

output = {'project' : input_data.get('repo').replace("-", " ")}

if input_data.get('milestone'):
    repo = input_data.get('repo')
    milestone = input_data.get('milestone')
    project = repo.replace("-", " ") + ' ' + milestone.replace("-", " ")
    output = {'project' : project}

If the issue being processed by Zapier has a milestone, this code sets project to Repository name milestone name, replacing any hyphens with spaces. Otherwise, it sets project to simply Repository name, also replacing any hyphens with spaces.

This works for me because I organize NumFOCUS projects in GitHub like this:

  • [repository] my-project
    • [milestone] milestone 1
    • [milestone] milestone 2

And in OmniFocus, I organize projects like this:

  • [folder] My Project
    • [project] Milestone 1
    • [project] Milestone 2

If you structure your projects differently, you’ll need to update the Python code above accordingly.
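
If you want to experiment with the mapping before pasting it into Zapier, you can fake the input_data dictionary locally (the values below are made up; inside Zapier, input_data and output are provided by the Code step):

# Quick local check of the mapping logic, outside of Zapier.
input_data = {'repo': 'my-project', 'milestone': 'milestone-1'}

output = {'project': input_data.get('repo').replace("-", " ")}

if input_data.get('milestone'):
    repo = input_data.get('repo')
    milestone = input_data.get('milestone')
    project = repo.replace("-", " ") + ' ' + milestone.replace("-", " ")
    output = {'project': project}

print(output)  # {'project': 'my project milestone 1'}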

When you’re ready, test the Python code and check to see that it creates the expected output:

Results of testing the custom Python code.

When everything looks good, save this step and continue on to creating the 4th and final step.

Step 4: OmniFocus app, create task action

In this last step you’ll configure the OmniFocus app to create an OmniFocus task with information from the retrieved GitHub issue.

First, select the OmniFocus app:

Select the OmniFocus app for Step 4.

Now select the Create Task action for the OmniFocus app:

Select the Create Task action for the OmniFocus app.

Next you’ll need to connect your OmniFocus account if you haven’t already and select which connection you’d like to use.

Next, set up the Create Task action. You’ll configure only the Title field, as follows:

--Step 1 Title @GitHub ::Step 3 Project #Step 2 Output //Step 1 Html Url

Let’s break this down:

  • The -- sets the name of the task.
  • The @ sets the context.
  • The :: sets the name of the project.
  • The # sets the due date.
  • The // sets the text of the note.

A few notes:

  • The name of the project is fuzzy-matched against the flattened names of folders and projects, so you don’t need to use a colon between the folder and project name.
  • With the AppleScript I’m using to parse OmniFocus’ Inbox, I had trouble with dates that include times, which is why I simplify the due date format in Step 2 of the Zap.
  • If you wanted to dynamically set the name of the context based on some attribute of the GitHub issue (e.g. a label), you could do that by modifying the Run Python action in Step 3 (see the sketch below).
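
Here’s one way that dynamic context might look, appended to the end of the Step 3 code. It’s only a sketch: it assumes you add an extra Input Data field (called labels here and mapped to the issue’s label names), and both the field name and the label values are made up for illustration.

# Hypothetical extension of the Step 3 code: derive an OmniFocus context
# from the issue's labels. Assumes an Input Data field named "labels"
# containing the issue's label names as a string.
labels = (input_data.get('labels') or '').lower()

context = 'GitHub'               # default context
if 'bug' in labels:
    context = 'Bugs'
elif 'documentation' in labels:
    context = 'Writing'

# output already exists from the code above; add the context to it.
output['context'] = context

You could then reference the Step 3 context output in place of the hard-coded @GitHub in the task title.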

For details on the syntax used for parsing the inbox, see this post.

Here’s what the task looks like in Zapier:

Configure the Create Task OmniFocus action.

As always, test the action before proceeding to make sure everything looks right:

Test the Create Task OmniFocus action.

If this looks good, click Create & Continue to create the task. Once you do this, flip over to OmniFocus and wait for the task to appear in your Inbox. It’ll look something like this:

New task in OmniFocus Inbox.

Now you’re ready to set up the script to parse that monster-looking task out of your OmniFocus Inbox and into the right spot!

Part 2: Parsing tasks in OmniFocus’ Inbox

Step 1: Manually run the ParseInbox script

For this part, if you haven’t already, you’ll want to grab a copy of the AutoParser scripts from either the original author or myself.

The repositories linked above contain a collection of AppleScripts for use with OmniFocus. (Thank you, Joe Buhling, for putting these together!)

There are two main options for running the script manually.

Option 1: You can run any of the scripts from the command line with the osascript command:

/usr/bin/osascript "/Users/christie/Bin/OFScripts/Auto-Parser/ParseInbox.applescript"

Option 2: If you don’t want to use the command line to run scripts, you can copy ParseInbox.applescript into OmniFocus’ scripts folder. To find out where this is, go to Help > Open Scripts Folder in OmniFocus and it will open a new Finder window at that location. Once you do this, you’ll see Script: ParseInbox as an option in the View > Customize Toolbar… window. Drag this icon to your toolbar for ease of use.

When you run the ParseInbox script, it will transform the Inbox task Zapier created that looks like this:

--install certbot @GitHub ::sustainbility index project kick off #06/08/17 //https://github.com/numfocus/collab-infrastructure/issues/30

Into the task install certbot, belonging to the project Project Kick Off in the folder Sustainability Index. The task will now have a due date of 6/8/2017, and note text that includes a link back to the original GitHub issue:

Task in OmniFocus after it has been parsed from the Inbox.

If at this point you realize that your Zap isn’t quite configured correctly or exactly how you want it, you can go back and adjust it. And, if you get tired of waiting for OmniFocus to sync with the server to retrieve the new task, just remember you can copy and paste the test output from Step 4 of your Zap.

Step 2 (optional): Automatically running ParseInbox

This step is totally optional and you can skip it if you’re happy manually running the script when you want to parse Inbox items.

However, if you don’t want to have to remember to do this, or if you want OmniFocus to be able to process Inbox items while you’re out and about, then you’ll want to automate it.

There are a few options for doing this. They all require your computer be on, but OmniFocus doesn’t have to be open (the script will open it if closed).

Option 1 is to use Hazel to run the script when your OmniFocus database has been updated. Joe explains how to configure this option on his blog here. I had mixed results with this method; the script seemed to run sometimes and not others. YMMV.

Option 2 is to schedule the script using launchd (macOS’s version of cron). This involves editing plist files, which I hate doing, so I bought Lingon X to make this easy.

Here’s what my settings for Lingon look like:

Lingon settings for scheduling the ParseInbox script.

And the plist generated by Lingon looks like this:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
	<key>EnvironmentVariables</key>
	<dict>
		<key>PATH</key>
		<string>/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/MacGPG2/bin:/usr/local/sbin</string>
	</dict>
	<key>Label</key>
	<string>of.autoparser</string>
	<key>ProgramArguments</key>
	<array>
		<string>/usr/bin/osascript</string>
		<string>/Users/christie/Bin/OFScripts/Auto-Parser/ParseInbox.applescript</string>
	</array>
	<key>StartInterval</key>
	<integer>300</integer>
</dict>
</plist>


Making a Podcast, Step 2: Gather your recording equipment

Note: This post is third in a series where I share what I’ve learned starting and producing the Recompiler podcast. If you haven’t already, start with the introduction. This post follows Step 1: Identify a Topic, Point of View, and Structure.


Step 2: Gather your recording equipment: Computer, microphone, audio interface, headphones for monitoring.

There are numerous ways to record and produce podcasts. Not unlike photography, you can put together a digital recording rig for very little, or you can spend thousands or tens of thousands of dollars on expensive, high-end gear. I recommend that for your first podcast endeavor, you get the best quality gear you can comfortably afford. If you end up doing a lot of podcasting, and find a way to fund it, you’ll surely want to upgrade your equipment. And by then, you’ll have more experience to guide you.

Below I give an overview of what you’ll need and explain what I picked for the Recompiler. For a more detailed guide, check out Transom’s excellent Podcasting Basics, Part 1: Voice Recording Gear.

Computer or portable recorder too?

First, you’ll need to decide how you’ll be recording your audio: via a computer or a portable recorder. If you’ll mostly be doing field interviews or otherwise traveling a lot, a portable recorder might make sense. The downside is that you’ll still need a way to edit and publish your podcast and that requires a computer. For the Recompiler, I first thought I’d be doing a lot of field recording so I picked up a Sony PCM-M10 ($200 at the time). While I use it for other things, I haven’t ended up using it much for the podcast. Instead, I record at my desk directly into my refurbished MacMini. The good news is that you don’t need a high-end machine to record and edit podcast audio. There’s a good chance that a computer you already have available to you will be sufficient. And, audio recording and editing software is available for Windows, macOS, and Linux.

Microphone and audio interface

Since podcasting is an audio medium, you’ll need a way to record audio. Almost all modern computers have microphones built in, and you can certainly start with whatever you have available to you. If you can’t afford to buy anything new and you are ready to get started, don’t let the lack of an upgraded microphone stop you. A smartphone is another good option for getting started, especially if you have an iPhone. Most portable digital audio recorders have microphones built in as well.

However, if you do have a couple hundred bucks to spend, I recommend getting a better external microphone along with an audio interface.

External microphones generally connect via USB or XLR. Some have both. If the microphone has USB, you connect it directly to your computer with a USB cable like you would an external hard drive or non-wifi printer. If the microphone has XLR, you need an audio interface between the microphone and the computer. The microphone connects to the audio interface via an XLR cable, and the audio interface connects to the computer with a USB cable. The XLR setup is overall more complicated and more expensive, but generally provides better quality.

There are several USB microphones aimed at first-time podcasters. When I recorded In Beta, I used a refurbished Blue Yeti. I did not get the best results: 5by5 nearly always complained about my audio quality. And, in general, I’ve had trouble with USB-based microphones, where I often get a ground-loop hum that everyone but me can hear. As with all things, YMMV. Some folks swear by the Yeti and other USB products from Blue. Rode also makes a USB microphone, but it’s more expensive than Blue’s offerings.

Having given up on USB microphones by the time we were planning the Recompiler, I looked for an affordable XLR solution. I settled on the Electro-Voice RE50N/D-B, a hand-held dynamic microphone, paired with the Focusrite Scarlett 2i2 audio interface. My choice of microphone was based on price (it was in my budget), the ability to use it in the field as well as in the “studio,” and the fact that it would work with my chosen audio interface without extra equipment. I don’t recall how I settled on the Focusrite. I think it was a combination of a recommendation via Twitter, price, and brand (Focusrite seemed well-known and dependable). I’m happy with both choices. The Scarlett 2i2 worked right away without fuss, and I get decent sound from the RE50N/D-B in a variety of environments.

If you’re just getting started, I definitely recommend the Focusrite Scarlett 2i2 ($150 new) if you want to be able to record a guest or other audio source in studio, or the Scarlett Solo ($100 new) if you just need to record from one audio source. Look on eBay for used equipment to save money.

As far as microphones go, there are too many options and preferences for me to feel comfortable giving a specific recommendation. If you’re just starting out, I recommend reading through reviews on transom.org and then getting the best microphone you can comfortably afford, knowing that it won’t be the last mic you buy if you stick with podcasting.

Other accessories

Unless you’re doing field interviews exclusively, you’ll need something to hold your microphone. This can be a tabletop or floor stand, or a desk-mounted arm. You might also want a pop filter and/or a shock mount. The Transom article I mentioned earlier gives a good overview of options for these.

For the Recompiler, I use the RODE PSA1 ($100) as a microphone mount and the simple foam microphone cover that came with the RE50N/D-B. I haven’t needed a shock mount because, I think, the RE50N/D-B is designed as a hand-held mic and doesn’t pick up a lot of vibration. I’m also careful not to bump it, the mic boom, or my desk while I’m recording.

Headphones

Don’t forget to get and use a decent pair of headphones while you’re recording and editing your podcast audio.

For the Recompiler, I picked up a pair of Sennheiser HD 202 II ($25) which are dedicated to audio recording and editing. In fact, they never leave my desk. That way I’m never scrambling to find them when it’s time to work. The Sennheisers I have aren’t amazingly awesome, but they were inexpensive and get the job done.

Whatever you pick, aim for headphones designed for studio monitoring that are over-the-ear, do not have active noise cancellation, and do not have a built-in mic. If you do end up using headphones with a built-in mic, double-check that you are not recording audio from that mic. There’s nothing more disappointing than recording a whole segment or show only to realize you used your crappiest microphone.

If you have it in your budget, you might consider the Sony MDRV6 ($99).

Questions or comments?

Please get in touch or leave a comment below if you have questions, comments, or just want encouragement!

Next post…

Stay tuned for the next post in this series!

Making a Podcast, Step 1: Identify a Topic, Point of View, and Structure

Note: This post is the second in a series where I share what I’ve learned starting and producing the Recompiler podcast. If you haven’t already, start with the introduction.


Your first step in making a new podcast is to identify a topic, point of view, and structure for your podcast.

This sounds simple, but it’s helpful to think about these at the beginning, to record your answers in writing, and to refer back to them often as your podcast matures.

For the Recompiler, the general topic (technology) and point of view (feminist; beginner-friendly) were already defined via Audrey’s clear vision for the written version:

The Recompiler is a feminist hacker magazine, launched in 2015. Our goal is to help people learn about technology in a fun, playful way, and highlight a diverse range of backgrounds and experiences. We’re especially interested in infrastructure: the technical and social systems we depend on. We want to share what it’s like to learn and work with technology, and teach each other to build better systems and tools.

As far as structure, early on we decided that episodes would feature a mix of Audrey and me talking about tech news and other timely topics, along with interviews of Recompiler contributors and other “subject-matter experts.” I put “subject-matter experts” in quotes because I intentionally look for folks from a wide range of backgrounds and experiences, many of whom might not be considered “experts” by mainstream tech.

We also decided that the Recompiler would have a casual, unscripted structure. We don’t currently broadcast live (although we might in the future). I do minimal editing, focusing mostly on making episodes listenable, rather than having a particular narrative arc. The order of what you hear is most likely the order in which we recorded, with inaudible or otherwise disruptive segments removed.

We aim for episodes to be about an hour long. Episodes always include two people: myself and Audrey, or myself and the person I’ve interviewed. Our target publishing frequency has changed as I’ve become more comfortable with the production process. First our goal was monthly, then twice a month, and now weekly. We don’t always meet this goal, but we’re getting better at it.

How did we make these decisions about structure? Mostly based on my constraints, both in terms of skill and time (both limited), as well as my personal preferences in terms of what I enjoy in podcasts.

To summarize, in thinking about your new podcast, you’ll need to decide:

  • general topics to focus on
  • point of view
  • structure
    • casual or scripted
    • number of hosts and guests per episode
    • target length in minutes
    • whether or not to broadcast live
    • frequency of publishing

The decisions you make regarding structure will determine the resources you need to produce a completed episode. For example, a heavily scripted show will require more audio engineering skill and editing time.

Questions or comments?

Please get in touch or leave a comment below if you have questions, comments, or just want encouragement!

Next post…

The next post in this series is: Making a Podcast, Step 2: Gather your recording equipment.


Making a Podcast, Intro: A Year of Producing the Recompiler

The first episode of the Recompiler podcast posted on February 4, 2016. This means I’ve had nearly a year of experience producing a podcast, and in this series of posts I’d like to share what I’ve learned.

Unlike with In Beta, a podcast I co-hosted with Kevin Purdy, I am responsible for the entire production of the Recompiler podcast: content development, booking, interviewing, audio engineering (recording and editing), publication, and promotion. With In Beta, I was just a host, responsible for developing content, performing the show, interviewing guests, and writing show notes. Staff from 5by5, the network to which In Beta belongs, did all the other audio engineering tasks and already had a publishing and marketing platform in place.

In truth, figuring out how to do the audio engineering was my biggest obstacle to creating the Recompiler podcast. It’s why there was a several months-long gap between our announcement about the podcast and our first episode.

Looking back, of course, many of the things that seemed overwhelming at the time are now routine. In the next series of posts, I share what I’ve learned. In doing so, I hope to encourage any of you who are interested in making your own podcast and give you concrete tips for getting started.

Next up: Making a Podcast, Step 1: Identify a Topic, Point of View, and Structure.

VidyoDesktop 2.2.x on Linux with PulseAudio 4.0 (Ubuntu 13.10)

Recently I upgraded my work laptop from Xubuntu 13.04 to 13.10. The upgrade went well, except for an issue with audio output from VidyoDesktop. Every other application worked fine: Skype, audio from Flash inside both Firefox and Chromium, gmusicbrowser, Rhythmbox, and the system sounds all performed as expected.

After spending a day spelunking the depths of PulseAudio, a co-worker pointed me to this bug report which links to this blog post about making Skype compatible with changes in PulseAudio 4.0.

I confirmed that manually starting Vidyo with the following command re-enabled audio:

PULSE_LATENCY_MSEC=60 VidyoDesktop

And then modified the Exec line in /etc/xdg/autostart/vidyo-vidyodesktop.desktop to this:

Exec=env PULSE_LATENCY_MSEC=60 VidyoDesktop -AutoStart

The non-autostart menu file (/usr/share/applications/vidyo-vidyodesktop.desktop) just needs the following:

Exec=env PULSE_LATENCY_MSEC=60 VidyoDesktop

We’re using version 2.2.x of the VidyoDesktop client, which I believe has been superseded, so you may not need this fix at all if you use a later client version.

Leaving Google: Moving email and calendar to Zimbra

Note: This post is part of a series of posts I’m writing about migrating from Google to other service providers. Read Leaving Google: A preface to understand my motivation and goals for this project.

Aside from things like online banking and bill-pay, email and calendar are probably the most important parts of my online life. They enable me to keep in touch, transact business, and generally know what I am supposed to be doing and when. As such, it took me a long time to find an alternative that would work for me.

The requirements and the search

Here are the requirements I defined for a calendar and email solution:

  • hosted and paid, yet affordable ($50-60 annually)
  • decent web interface
  • POP3 and IMAP access
  • SSL/TLS enabled
  • ability to use my own domain and to add user and domain aliases
  • multiple calendar support
  • ability to share calendars with internal and external users
  • ability to have private and public appointments
  • ability to subscribe to external calendars
  • reasonable disk space (5-10GB) and attachment quotas (>10MB)

Finding a stand-alone email provider was not an issue. Pobox (my favorite), Hushmail, Fastmail and Rackspace all provide reasonable email hosting and there are many others.

What these services lack are the robust calendaring features I need. Both Pobox and Rackspace include calendars with their email, and OwnCloud has a calendar feature. But all three are simple and lack the sharing and subscribing abilities I absolutely need.

Lack of strong calendar features continued to stall my search for Google alternatives until I realized that I was already using a great alternative at Mozilla! There we use Zimbra, a “collaboration suite” developed by VMware that includes email and calendaring. VMware offers open source and network editions of Zimbra. If you have sufficient courage, stamina, and time to run your own mail server, you can download and install the open source edition for free (although it lacks some features of the paid version).

I have no desire to run my own mail server. Thus began the search for hosted Zimbra providers. I narrowed my list to three: ZMailCloud, MrMail, and Krypt CloudMail, from which I picked ZMailCloud.

The migration

Once my account was set up, the migration process was fairly straightforward:

  • Update MX records for my chosen domain.
  • Start forwarding Gmail to new email addresses.
  • Add Gmail address as external account in Zimbra via IMAP and start copying messages.
  • Export main Google calendar and import into calendar called “Google” on Zimbra. Start copying relevant appointments to new main calendar.
  • Begin the tedious process of updating my email address everywhere.

I had a couple of choices when migrating all of my email messages:

  • Use an email client like Thunderbird to copy via IMAP
  • Add Gmail address as an external account via POP3. The disadvantage to this approach is that you get zero folder information, which is only a problem if you were using folders/labels in Gmail.
  • Not copy messages at all and start with a clean slate!

Also, you might be wondering why I didn’t simply import my Google calendar into my new main calendar. I actually did this at first. Then I realized that all of the appointments were imported with the visibility set to public. This won’t work for me because I want to be able to share my calendar with the public, allowing them to see the details for some appointments (like office hours and public meetings) but not for others.

Progress so far

The migration, begun a couple of weeks ago, continues. Each time I log in to an account I check the email address and update it if need be. I update mailing list subscriptions as I read messages from those lists; those hosted on Google Groups are the most tedious to update.

I also haven’t figured out how to tell everyone who might need to know that I have a new email address. I can’t bring myself to spam my entire address book (and there are probably folks in it I don’t actually want to engage with). So, for the time being, I’m just replying from the new address and letting people or their email clients update my record on their own.

Other solutions?

I’m curious about other possible solutions. For those of you who have switched away from Google mail and calendar, or were never there in the first place, what do you use? Let me know in the comments!


How to install BitlBee (IRC to chat and Twitter gateway) on Ubuntu

What is BitlBee?


BitlBee enables you to connect to chat networks and Twitter via an IRC client and interact with those chat networks in the same way you interact with IRC.

Why would you want to do this? Aside from being neat, being able to connect to chat and Twitter with your IRC client means there are fewer programs to run and keep track of, and it lets you use the keyboard to issue commands instead of the GUI.

Installation on Ubuntu

This post explains how to build BitlBee from source on the most recent Ubuntu LTS (12.04 Precise). There are packages for BitlBee, but they aren’t up to date.

Note: These instructions are for a single-user setup of BitlBee. If you are installing a server for multiple users, especially ones you don’t know well, please read the documentation to be sure you understand what you are doing and are selecting the most secure options.

Dependencies

You’ll need to make sure the following packages are installed on your system: build-essential and libglib2.0-dev. Additionally, you’ll need an SSL library; I recommend libgnutls-dev (over openssl, which can be problematic). And if you want to support off-the-record chat, you’ll need libotr2-dev.

You can install all of those with:

sudo apt-get install build-essential libglib2.0-dev libgnutls-dev libotr2-dev

Download, configure, build, and install from source

wget http://get.bitlbee.org/src/bitlbee-3.2.tar.gz
tar -xzvf bitlbee-3.2.tar.gz
cd bitlbee-3.2
./configure --otr=1 --msn=1 --jabber=1 --oscar=1 --twitter=1 --yahoo=1 --ssl=gnutls --etcdir=/etc/bitlbee
make
sudo make install

The configure options included above specify the following:

  • inclusion of the msn, jabber, oscar (AOL), yahoo, and twitter protocols
  • OTR (off-the-record messaging) support
  • gnutls as the SSL library
  • /etc/bitlbee as the configuration directory

Configure BitlBee

Next you’ll need to configure BitlBee for use.

First, create and then edit the sample conf file:

sudo make install-etc
sudo vim /etc/bitlbee/bitlbee.conf

Here are the important options to set:

  • RunMode: How the BitlBee server should run. Options include: Inetd, Daemon, ForkDaemon.
  • User: The user the BitlBee server should run as. bitlbee makes sense here.
  • DaemonInterface: Which network interface to use. The default should be fine.
  • DaemonPort: Which port to use. The default should be fine unless you’re already using it for IRC or ZNC (a bouncer).
  • AuthMode: I recommend setting this to Open, and then to Registered after you’ve registered yourself.
  • AuthPassword: Needed to log in to closed systems. Generate a hashed password with bitlbee -x hash <password>.
  • OperPassword: Unlocks operator commands. Generate a hashed password (see previous bullet).
  • ConfigDir: Make sure this is the same directory specified in the configure option. In this example, it’s /etc/bitlbee.

Here are the example conf directives:

RunMode = ForkDaemon
User = bitlbee
DaemonInterface = 0.0.0.0
DaemonPort = 6667
AuthMode = Open
AuthPassword = md5:SECRET_HASH
OperPassword = md5:SECRET_HASH
ConfigDir = /etc/bitlbee

Add bitlbee user

Now you need to create that system user and make sure it can read the conf file:

sudo adduser --system bitlbee
sudo chmod -R +r /etc/bitlbee

Start the server

Now run the server:

sudo bitlbee -c /etc/bitlbee/bitlbee.conf

Usage

Connect with your IRC client

Open your IRC client and add the bitlbee server just as you would any IRC server. Here’s what it looks like in X-Chat:

mybitlbee server in X-Chat.

The server password will be whatever you put for AuthPassword in your bitlbee.conf. It doesn’t matter what you enter for nickname, user name, or real name; these will be used when you register with BitlBee.

Register your user

register <password>

You should then see

<@root> Account successfully created

On subsequent sign ins you’ll need to identify just like you do with NickServ:

identify <password>

Now that you’ve registered your user, it’s a good idea to change AuthMode to Registered in your bitlbee.conf.

Setup your accounts

When you first start BitlBee, you won’t have any chat or Twitter accounts so you’ll need to set them up.

<@christiek> account list
<@root> No accounts known. Use `account add' to add one.

So let’s set up gtalk:

<@christiek> account add jabber myemail@gmail.com
<@root> Account successfully added with tag gtalk
<@root> You can now use the /OPER command to enter the password
<@root> Alternatively, enable OAuth if the account supports it: account gtalk set oauth on
<@christiek> account gtalk set oauth on
<@root> oauth = `on'

Now the gtalk account is configured, but it isn’t turned on:

<@christiek> account list
<@root>  0 (gtalk): jabber, christiekoehler@gmail.com
<@root> End of account list

So we’ll turn it on and follow the prompts to complete the oauth authentication:

<@christiek> account gtalk on
<@root> jabber - Logging in: Starting OAuth authentication
<jabber_oauth> Open this URL in your browser to authenticate: URL
<jabber_oauth> Respond to this message with the returned authorization token.
<@christiek>TOKEN

Visit the BitlBee wiki for instructions on how to set up other chat networks or Twitter.

Time to chat!

Once you’ve configured a chat account and are connected, you’ll see your contacts listed as you would regular IRC users.

To initiate a chat you can use IRC commands:

/query robert.smith