I recently acquired a Spark Core, and (after some minor hassle) got it connected to my network and walked through the examples. Neat product, though to some extent it suffers from problems of “trying to make it easier for the noobs, with a failure mode of making it harder for everybody” and “everybody wants to write features, nobody wants to write documentation”.
Past the jump, I’ll explain how to get around the 31-character wireless passphrase limit, and how to use cloud compilation but with a real Makefile, local copies of your source code and your favorite text editor.
Getting Connected
The Case Against Smart Config
One of the main reasons to get the Spark Core in the first place is that it has built-in 802.11b/g connectivity and a native TCP/IP stack, courtesy of the TI SimpleLink CC3000 network processor. (This is a single chip incorporating an 802.11b/g PHY, a microprocessor and a network stack.) For this to do any good, you obviously need to get the Spark Core connected to your network infrastructure in the first place.
To do this, the Spark Core folks have leveraged TI’s “Smart Config” feature, which uses broadcast packets from an Android or Apple IOS device to tell the CC3000 the SSID and crypto keys for first-time setup.
Those of you who have used computers before are already suspicious, because it is a universal law that any software feature with the word “smart” in the name inevitably does something dumb. To be fair to the TI folks, they’ve done some clever work around an extremely difficult problem, but sure enough, there are some good reasons to not use Smart Config on the Spark Core:
- It doesn’t work with wireless pass phrases longer than 31 characters. (Worse, the “Spark Core” Android app enforces the wrong limit, allowing 32-character pass phrases.)
- The credentials are transmitted over the air encrypted with AES (you can tell, because otherwise there wouldn’t be a length limit at all). That’s good. But the Spark Core app never asks for a key, and there’s no “here is your key!” slip of paper unique to each Core. Thus, using the Spark Core app to do Smart Config means we’re transmitting the keys to the kingdom encrypted with an AES key hardwired into the app (and the default Core firmware). That’s not very safe.
- You have to have an Android or Apple IOS device, and be willing and able to connect it to the wireless network you want the Spark Core to use.
Config via USB: Prerequisites
(Note: The specific instructions below all assume a UNIX-like system, and some may be specific to recent Ubuntu Linux systems. The general principles involved apply equally to other systems such as Mac or Windows, but the exact text of the commands involved may or may not. Windows users in particular will need to install a special device driver in order to see a “listening mode” Core as a serial device. See the Spark Docs for more details.)
Fortunately, there is an alternative to Smart Config: Configuration over USB, using the command-line tools. The tools in question are based on node.js, and are quite handy to have around even if Smart Config works for you. On Ubuntu 14.04.1, the installation process looks something like this:
sudo apt-get remove node modemmanager
sudo add-apt-repository ppa:chris-lea/node.js
sudo apt-get update
sudo apt-get install nodejs
sudo npm install -g spark-cli
Most of the above I got from user Hypnopompia’s post on the Spark Core fora, entitled “How to install the spark toolchain in Ubuntu 14.04”.
We’re removing the “node” package because it’s an unrelated thing having to do with packet radio that also provides a command called “node”; having it installed may break node.js stuff. We’re uninstalling modemmanager because it seizes and sends junk to any serial port it sees, and also it’s 2014 and we have broadband now and also fire and the wheel and just please tell me that nobody has a reason to want to connect a Spark Core and a modem to the same box?
Even though the node.js you get by default in recent Ubuntu releases should probably be OK, Ubuntu updates it about once every galactic rotation. So, I’m echoing Hypnopompia’s suggestion to use the alternate repository for that.
At this point, you should be able to run the following command (as a normal, non-root user):
spark login
That will cache the credentials you need for talking to the Spark cloud, and to your cores, in the local file ~/.spark/spark.config.json. Remember to spark logout to revoke those credentials when you’re done, if you’re bothered about that.
Now, how does that help us configure a Spark Core that can’t connect to the network yet? When the Spark Core is in “Listening Mode” (LED flashing blue), it appears on the USB bus as a serial device. When a Spark Core on your USB bus enters listening mode, you should see dmesg output along the lines of:
usb 2-1.1: new full-speed USB device number 8 using ehci-pci
usb 2-1.1: New USB device found, idVendor=1d50, idProduct=607d
usb 2-1.1: New USB device strings: Mfr=1, Product=2, SerialNumber=3
usb 2-1.1: Product: Spark Core with WiFi
usb 2-1.1: Manufacturer: Spark Devices
usb 2-1.1: SerialNumber: 0xN0TY0UR5GTF0PL5
cdc_acm 2-1.1:1.0: This device cannot do calls on its own. It is not a modem.
cdc_acm 2-1.1:1.0: ttyACM0: USB ACM device
The spark CLI tools know how to talk to that serial port to perform first-time setup on the Core.
Connect Using “spark setup”
The easy way to perform the configuration is:
- Make sure your Core is in listening mode, with the big multi-color LED (directly between the USB jack and the CC3000, the big silver box) flashing dark blue. If it isn’t, hold down the mode button until it is. (If it won’t go into listening mode, see the Troubleshooting page — specifically, item (3) for flashing yellow. You may have to do a factory reset, and if it still doesn’t work, re-flash the default firmware via DFU.)
- Run the following command:
spark setup
- Follow the prompts.
- If you make a typo or it won’t connect, hold down the mode button to go back to listening mode and try again from (2).
In my experience, this process would reliably make my Core connect to my wireless network (WPA2 Personal, AES, with a pass phrase much longer than 31 characters). You can tell it’s connected when the multi-color LED is “breathing” cyan — that is, emitting cyan light and ramping the luminance up and down in a triangle wave at about 0.5Hz.
However, the last step where setup “claims” the Core (associates that core with my account) only worked about one time in four. If you have a Core that is connected but unclaimed, you can fix that problem with a command like the following:
spark core add 0123456789abcdef01234567
(replace the 24-digit hex number with your own Core’s ID, which should have been shown during the setup command).
Alternative: Connect Using a Serial Terminal
If the spark setup command isn’t working for you (or if you just want to experiment), you can also perform first-time setup using a serial terminal. On Ubuntu, the Core shows up as a serial device like /dev/ttyACM0 which wants to talk to you at 9600 baud, 8 bits, no parity, 1 stop bit, and no flow control. Any reasonable serial terminal application (like minicom or GNU screen) should work fine.
There are two commands: ‘i’ (for identify) and ‘w’ (for wireless setup). Note that these are single characters; you need not send a newline after the command.
The ‘i’ command will show you the unique hex ID of your Core. Make a note of this (or copy it to the clipboard) as you’ll need it later.
The ‘w’ command prompts you for your wireless network SSID, encryption mode and passphrase. (The core will send text prompts over the serial connection for each one. You will need to send a newline after each entry here.)
After you have used the ‘w’ command (assuming you entered the credentials correctly), your Core should reset and connect to the wireless network. (Look for the multi-color LED “breathing” cyan.) If you need to try again, hold down the mode button to go back into listening mode. (Note that the USB serial device will disappear when the Core resets, and won’t re-appear unless you re-enter listening mode. You may need to re-start your serial terminal application.)
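If you’re curious what the terminal is actually doing (or want to script the ID query rather than pecking at minicom), the exchange is simple enough to reproduce in a few lines of C++. The sketch below is only an illustration of the protocol described above, not part of the Spark tools; it assumes the Core enumerated as /dev/ttyACM0, as in the dmesg output earlier:

// core_id.cpp -- illustrative sketch only, not part of the Spark tools.
// Opens the listening-mode Core's serial port at 9600 8N1, sends the single
// character 'i', and prints whatever the Core answers (the hex Core ID).
// Build with: g++ -o core_id core_id.cpp
#include <cstdio>
#include <fcntl.h>
#include <termios.h>
#include <unistd.h>

int main() {
    const char *port = "/dev/ttyACM0";   // adjust if your Core shows up elsewhere
    int fd = open(port, O_RDWR | O_NOCTTY);
    if (fd < 0) { perror(port); return 1; }

    termios tio;
    tcgetattr(fd, &tio);
    cfmakeraw(&tio);                     // raw mode: no echo, no line discipline
    cfsetispeed(&tio, B9600);            // 9600 baud, 8N1, no flow control
    cfsetospeed(&tio, B9600);
    tio.c_cflag |= CLOCAL | CREAD;
    tio.c_cc[VMIN] = 0;                  // read() returns whatever has arrived...
    tio.c_cc[VTIME] = 20;                // ...or gives up after 2 seconds of silence
    tcsetattr(fd, TCSANOW, &tio);

    write(fd, "i", 1);                   // single character, no newline needed

    char buf[256];
    ssize_t n;
    while ((n = read(fd, buf, sizeof(buf) - 1)) > 0) {
        buf[n] = '\0';
        fputs(buf, stdout);              // should include the 24-digit hex Core ID
    }
    putchar('\n');
    close(fd);
    return 0;
}

The ‘w’ conversation works the same way, except that the Core sends prompts and expects newline-terminated answers, so an interactive terminal is the easier tool for that one.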
Once the Core is connected to the network, you’ll have to “claim” it using the spark core add <id> command as described above.
Cloud Compilation, Local Files, CLI
Spark gives you a very slick “my first IDE” in the form of the so-called “Sparkulator” (aka Cloud IDE, aka Build Page). Credit where it’s due: the Sparkulator is slick, and works way better than it has any right to. It’s a great learning tool, in that it lets you write and test code right away, with a few clicks, and without having to install any special software or use a specific platform.
But — and you knew there’d be a “but” — it isn’t a very good fit with the way I like to write software. My biggest gripe is that I have a text editor I prefer, and the one built in to the Sparkulator isn’t it. My second biggest gripe is that if I write a program, I want a copy of that program on a local disk that I control, with backups I can put my hands on. (You can get that with the Sparkulator, but it involves having to cut-and-paste your sources — one file at a time — into something that ends up in a local file.)
I know I can install the toolchain and all the libraries locally, run my own copy of the “cloud” and thus not depend on any outside resource. That’s probably what I’ll do, eventually.
However, in the meantime, I wanted something a little more refined than the Sparkulator but without having to do a whole lot of work. Looking at the node.js spark CLI tool, it occurred to me that I could write a Makefile and a little bit of glue and get most of what I wanted with not a whole lot of effort.
Here is my example: spark-blink.tar.gz (4.9KB gzipped tarball)
See the README inside for details. The Makefile and the “getid” Perl script are heavily commented.
The example application (application.cpp) may also be of some mild interest, as it demonstrates an alternative way to do a simple blinking LED without having to call delay() or otherwise block in the loop() function. (This is important because the Core firmware is single threaded. When loop() is running, the network stuff isn’t.)
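If you just want the idea without pulling apart the tarball: rather than calling delay(), you keep loop() short and use millis() to decide when it’s time to toggle the pin. A minimal sketch of that technique (not the exact contents of the tarball’s application.cpp, and assuming you use the Core’s on-board blue LED on D7) looks something like this:

// Minimal sketch of a non-blocking blink -- not the exact contents of the
// tarball's application.cpp. Toggles the Core's on-board LED (D7) based on
// elapsed time, so loop() returns quickly instead of blocking in delay().
const unsigned long BLINK_INTERVAL_MS = 500;  // arbitrary half-second blink
unsigned long last_toggle = 0;
bool led_on = false;

void setup() {
    pinMode(D7, OUTPUT);
}

void loop() {
    unsigned long now = millis();
    if (now - last_toggle >= BLINK_INTERVAL_MS) {
        last_toggle = now;
        led_on = !led_on;
        digitalWrite(D7, led_on ? HIGH : LOW);
    }
    // loop() falls through almost immediately, so the firmware gets back to
    // servicing the cloud connection between toggles.
}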