Insecure Futures: Privacy, Security and Connected Devices (Weds 1 Nov, 6pm): RSVP here.
The event is part of a series of panels curated by Machines Room and Kickstarter. Sarah and I will be doing this as a "fireside chat." Should be thought-provoking -- these are some chewy topics, and Sarah is an expert. Her consultancy researches trust, policy and design for clients including Google and Facebook, with output both practical and speculative.
We've each been asked to spend 5-10 minutes at the beginning of the session to set out our stand, so to speak. So this is my current draft on what I'm going to say. Comments welcome; I'll evolve it some before speaking.
On IoT, security, and privacy. Let me say a few words about security first; then privacy.
And really, because we're talking about the Internet of Things -- devices in people's homes and in businesses -- what we're talking about is the security of the data and other devices on the trusted networks in those places.
With my investor hat on, a startup that doesn't take security seriously is obviously a problem because it's storing up trouble down the road -- it will be harder to acquire, and it has the potential to be part of something catastrophic.
For me, one tell around this - a technology red flag - is when companies build their own stack themselves for secure connection of devices to user accounts (called provisioning), or for performing over-the-air (OTA) updates. These two are bellwethers: if something isn't right here, it's likely that security hasn't been considered elsewhere in the stack.
It's easy to convince yourself, as a startup, that there is no solution out there that meets your needs for provisioning and updates. But over the last 12 months, the technology stack for connected devices has matured. And honestly, these stacks come with features that you will never get round to building yourself. So it's worth looking for existing solutions.
resin is an interesting example of a useful stack. One of the things resin makes easy is over-the-air updates for device software. But because some of their users run this software for drones, they also include a feature that allows the drone to postpone the software update until it has safely landed. That's a useful feature. Let's say you're building a cash register: it would be great to have a feature where it can postpone updates till after the lunch rush is over. That's the same thing. But will you get round to building it yourself? Probably not.
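The postpone-until-safe pattern is simple to sketch. Here's a hypothetical illustration in Python -- not resin's actual API -- where the device owns a callback that says whether it's safe to apply a pending update:

```python
class OTAUpdater:
    """Hypothetical sketch of the postpone-until-safe pattern --
    not resin's actual API. The device supplies a callback that
    says whether it's safe to apply an update right now."""

    def __init__(self, safe_to_update):
        self.safe_to_update = safe_to_update  # callable returning bool
        self.pending = None

    def receive(self, version):
        # New firmware arrives over the air; hold it rather than
        # applying it immediately.
        self.pending = version

    def tick(self):
        # Called periodically. Apply the held update only when the
        # device says it's safe (drone landed, lunch rush over).
        if self.pending and self.safe_to_update():
            applied, self.pending = self.pending, None
            return applied
        return None

# A drone defers the update until it has landed.
in_flight = True
updater = OTAUpdater(safe_to_update=lambda: not in_flight)
updater.receive("v2.0")
assert updater.tick() is None    # still flying: update deferred
in_flight = False
assert updater.tick() == "v2.0"  # landed: update applied
```

The point is how little domain knowledge lives in the updater itself: the drone or the cash register only has to answer "is now a good time?"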
So building your own stack is hard to get right, and more importantly it's expensive to keep up to date. Over months, as the technology landscape evolves, a resource constrained startup may find itself lagging. And that's where security problems emerge.
Building your own artisan stack feels like an expensive indulgence in most cases. The line to keep in mind is Werner Vogels' maxim - Vogels is the CTO of Amazon - no undifferentiated heavy lifting. That is, don't put significant engineering resource into stuff that isn't your core business.
But security isn't just technology. It's design.
It's what you encourage users to do. A friend of mine in San Francisco had some smart lighting and smart plugs some years ago. It had this great feature where, if you're on the same wi-fi network, it automatically associates the devices with your app so you can control them. And then, even when you're not on the network, you can turn the lights on and off. Handy.
So some months after staying with my friend, I discover - from London, while demoing the app - that I can turn on the lights in his front room. I discover this because he texts me, after I've been doing this for some weeks, to ask if it's me turning on and off his lights at 4am. Yes, yes it is.
Of course I reckon with this power I can possibly start a fire. Lights on and off as quick as possible. No security stack is going to help. But thoughtful design can.
The tension for startups is that thoughtful design, and therefore good security, requires you to know what your product and service is doing -- but in the early stages you may have to change the product quite a few times to get it right.
Now you think I'm going to say that this is a difficult decision, blah blah blah, that startups should consider security early on, despite this.
I'm not going to say that. I'm going to say that maybe a startup should ignore security, just a little bit.
What I mean is: if I meet a startup that has spent ages on its security before getting any real customer traction, I am going to be nervous that they have over-engineered the product and won't be able to iterate. The product will be too brittle or too rigid to wiggle and iterate and achieve fit.
So it's a balance.
One of the reasons that security matters is because it can lead to privacy being violated. Or rather, let me clarify:
Poor security can mean a startup's customer gives up privacy in an unintended way. That's going to damage sales.
But what's more of a preoccupation to me is when privacy is reduced in an intended way. You see this a lot when a startup has figured out how to make a business work by being not quite straight-up about what they're doing with the data they're collecting.
You would be surprised how many companies like these I encounter. Or maybe you wouldn't be.
I think it should be a point of greater social concern that consumers are asked to consent to data retention and usage when even the people collecting the data don't know what it may be used for down the line. Object recognition and facial recognition are getting really good -- but they weren't great or well known at the point I subscribed to most of the services I now trust with my data. Can it really be said I consented to this? So we need a better way to discuss this, as a society.
With a more commercial hat on, I subscribe to the view that, in most cases, big data is not an asset, it is a liability. If it's not necessary for the business model, then it's an expense to keep it secure. So don't incur that expense. For example, you don't need to keep credit card numbers to take payments. Outsource it. You don't need to move video to the cloud to do image recognition -- we have machine learning at the edge for that now.
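That edge-first shape is easy to sketch. In this illustrative Python (everything here is made up for the example; `detect_person` stands in for an on-device model), the raw frames never leave the device -- only the aggregate you actually need does:

```python
# Sketch of data minimisation at the edge: run recognition on the
# device and transmit only an aggregate, never the raw video.
# Illustrative only; detect_person stands in for an on-device model.

def detect_person(frame):
    # Stand-in for edge ML. Here a "frame" is just a dict;
    # a real model would look at pixels.
    return frame.get("person_crossing")   # "in", "out", or None

def summarise(frames):
    # Only this summary ever leaves the device; the raw frames
    # (the liability) are discarded immediately afterwards.
    counts = {"in": 0, "out": 0}
    for frame in frames:
        direction = detect_person(frame)
        if direction in counts:
            counts[direction] += 1
    return counts

frames = [
    {"person_crossing": "in"},
    {"pixels": "..."},          # nothing detected in this frame
    {"person_crossing": "in"},
    {"person_crossing": "out"},
]
assert summarise(frames) == {"in": 2, "out": 1}
```

There's nothing to leak, nothing to secure at rest, and nothing to be subpoenaed: the expensive liability was never collected.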
But mainly, I think about this: is it skeevy?
The tide has turned on privacy, just as it did for sustainability. For ages being sustainable was something companies did just to feel good about themselves. Now it's both consumer expectation and good business.
With privacy? For B2B startups I feel that being privacy conscious is becoming a differentiator and should be advertised as such. No potential business customer will want to be associated with the risk of leaks, being hacked, or potential damage to the brand from revealed "skeevy" behaviour.
It's not a negative thing. There's opportunity here too.
I want to end with an example which is Hoxton Analytics, a company I had the privilege of working with at the R/GA IoT accelerator I ran earlier this year. By the way, we're running another one, and applications close on 7 December, just a few weeks from now. We can talk about that afterwards.
Hoxton Analytics supply pedestrian footfall intelligence to their clients. They count the number of people walking in and out of your shop.
Historically this has been done with infra-red beam interruption. Well, that can't track groups or whether people are going in or out.
So instead you can do it by tracking smartphone signatures. Information-rich but not everyone has their Bluetooth or wi-fi turned on.
So you can really amp it up and monitor footfall with cameras doing facial recognition: that doesn't fly in Europe, it's personally identifiable information. Fine elsewhere in the world though.
Hoxton takes a different approach. They have cameras right down low on the floor, and they use machine learning - on the device - to recognise shoes.
It's crazy accurate. 95% accurate. It can also count group sizes, and whether people are going in or out. So it can do capacity.
It also doesn't store personally identifiable information so it's good in Europe.
But get this. Because they've built this solution, it means they can also use it in public places. So you can point the camera out of the window and see how many people are walking past, versus how many people are walking in. This is the holy grail, like a conversion funnel, like Google Analytics, but for physical retail. And they've got there by considering privacy not as a product constraint, but as a product feature.
That's where my head's at regarding security and privacy. I'm going to chew on these thoughts a bunch before the discussion with Sarah, and I'd welcome your thoughts -- either on my views as laid out above, or on questions to ask her.
I don't know if there are any tickets left but if there are do come along and if you're already signed up, then I look forward to seeing you on Wednesday night.
The problem is that you launch a thing or have some big news and those pesky journos won't cover it.
Here's one approach:
If you're a pro, or if you have a marketing team, talking to journalists like this is second nature. But for founders who are just getting going - and for rank amateurs like me - it can be hard to know where to start.
So one way is to use what I call a Tick-Tock List.
(I only call it this in my head. Nobody else says this. What I mean is you should email people on the regular, like clockwork.)
How to run a Tick-Tock List:
What should be in each email:
The email should be short and easy to read. Use bullets.
By achievement I mean something outward-facing that is actually interesting. Concrete. If nothing happened, say nothing happened -- and why.
After you've done this a few times, and if you've got something genuinely worthy of a story, you might want to say - before your three things, in bold - that you've got a launch/event/newsworthy thing coming up in a week or two, and you're hunting for coverage. Offer to chat about it.
You might find - and this is the goal - that somebody on your list, somebody who has never replied before, happens to receive the email at the right time and they have the right-shaped hole in their slate, and so they get in touch to learn more and hopefully do a story.
When you say what's coming up, don't be cagey or fake-enticing. Your email recipients aren't marks, they don't owe you anything, these are humans, one day maybe you might be friends. Be open enough for them to make a decision. But likewise don't put them in the difficult position of being told a detail via email that you really want to keep secret.
What is newsworthy? Think: is this so interesting that, if you heard it about someone else, you would want to tell your non-bubble friends? Have you said it in the right way to be easily understood, and provided the right words for others to do the same? Can it further the journalist's narrative?
(Aside. I feel that every publication has a worldview that it is continuously pushing. It could be something like "technology is building the beautiful future we imagined when we were kids" or it could be "this thing is niche right now but one day it will be mainstream and momentum is growing." Find and provide an angle to allow journalists to use your story to develop and argue this worldview with their readers.)
The hard bit: continuing with the Tick-Tock List.
Let's see, what else. Did I already say this isn't a newsletter? This isn't a newsletter - and there are many, and I subscribe to many, and they are brilliant - so you should also do one of those (and a blog, and a twitter, and...). But this is more intimate. An actual email. Um. Be respectful. Your goals are
I've shared the Tick-Tock List pattern with a few companies over the years. I'm actually a bit nervous to share it here because it's so trivial. But I've had a good experience of this personally, and reports of good effects, so I figured I'd write it up.
Please let me know if it works for you. (And if you're on the other side of the fence, I'm curious about your views too.)
Bonus link: Mike Butcher's article/rant The Press Release Is Dead - Use This Instead is fantastic. Check out the list of questions that he needs answered, as Editor-at-large of TechCrunch Europe, to get to grips with a possible story.
Early in 2017 I ran an accelerator in London investing in Internet of Things startups, and it went so well that we're doing it again. Tell your friends.
Upcoming events: see the bottom of this post for some places we can meet over the next month.
The program in 5 bullets:
If you'd like to see an example of the visual identity work, I love the look and messaging of Flock's website and app (alum 2017). Flock sells pay-as-you-fly drone insurance, using a proprietary and automatic risk algorithm, and is now - impressively - partnered with Allianz for underwriting.
For me, Internet of Things means digital reaching into the real world. My favourite startups use now-mature IoT tech (whether hardware or software) to do something that wasn't possible before, such as insanely accurate pedestrian footfall counting by using artificial intelligence to count shoes, or halving food waste in commercial kitchens. Both of those are companies in the 2017 cohort. My favourite IoT startups don't say IoT on their homepage.
Here are the 2017 alumni. I'm delighted with how the cohort is going. (Some are now based here at R/GA London where we offer below-market desks to portfolio companies.)
The website: R/GA IoT Venture Studio UK with info about the upcoming program.
Last year only 3 of the 9 companies in the program had women founders. That tells me we didn't do a good enough job.
This year, I'd like to get info out especially to women and people of colour. If that describes startups you know, or you know groups and networks that are representative, I would appreciate your help to spread the word. Please share a link to this post.
There are a number of ways we can meet/talk.
Applying to the program is easy: use the form here.
We're accepting applications until 7 December 2017.
We're also always looking for more sponsors. Companies like Snapchat, Westfield Labs, and Intel like working with R/GA Ventures because they get visibility in the emerging tech ecosystem, and early access to startups which are ready to partner. Let me know if you'd like to chat more.
Voice systems are always listening, but it's expensive (and invasive) to analyse everything picked up by the microphone. Hence wake-up words, which keep the rest of the system switched off until heard, and are - in theory - cheap to detect.
How the "Hey Siri" wake-up words work, by Apple's machine learning team.
The wake-up words run as a tiny brain. In the following, DNN stands for Deep Neural Network.
To avoid running the main processor all day just to listen for the trigger phrase, the iPhone's Always On Processor (AOP) (a small, low-power auxiliary processor, that is, the embedded Motion Coprocessor) has access to the microphone signal (on 6S and later). We use a small proportion of the AOP's limited processing power to run a detector with a small version of the acoustic model (DNN). When the score exceeds a threshold the motion coprocessor wakes up the main processor, which analyzes the signal using a larger DNN.
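The two-stage shape of that pipeline is worth pulling out. Here's a toy sketch in Python (the scores, thresholds, and string-matching "models" are all made up for illustration -- real detectors score acoustic frames): a cheap always-on detector gates the expensive one, so the big model only runs when the small one is already suspicious.

```python
# Toy sketch of two-stage wake-word detection: a cheap detector
# runs on everything; the expensive model runs only when the cheap
# score crosses a threshold. All scores here are made up.

def small_dnn_score(frame):
    # Stand-in for the tiny always-on acoustic model.
    return 1.0 if "hey siri" in frame.lower() else 0.1

def large_dnn_score(frame):
    # Stand-in for the full model on the main processor.
    return 0.99 if frame.lower().strip() == "hey siri" else 0.05

SMALL_THRESHOLD = 0.5
LARGE_THRESHOLD = 0.9
expensive_calls = 0

def detect(frame):
    global expensive_calls
    if small_dnn_score(frame) < SMALL_THRESHOLD:
        return False             # main processor stays asleep
    expensive_calls += 1         # wake the main processor
    return large_dnn_score(frame) >= LARGE_THRESHOLD

audio = ["traffic noise", "hey siri", "hay seery", "music"]
hits = [f for f in audio if detect(f)]
assert hits == ["hey siri"]
assert expensive_calls == 1      # the big model ran only once
```

The power budget lives in that `expensive_calls` counter: almost all audio is rejected by the cheap stage, so the expensive stage barely ever runs.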
Compiled tiny brains. High accuracy, low power recognisers, super focused single feature fetishisers.
A.I. on dedicated silicon is getting cheeeeeap.
Give it a few years, and I reckon voice-on-a-chip and hand-gesture-sensitive-lensless-camera-on-a-chip and make-any-surface-touch-sensitive-on-a-chip and make-use-of-nearby-watches-and-headphones-on-a-chip will be so accurate, so power efficient, and so cheap that they will undercut the cost of physical interface components like buttons and screens -- and therefore be used instead. For everything from kitchen scales to door locks. Which will change how we interact with products and what they look like.
This finding represents the first convincing demonstration for the use of the starry sky for orientation in insects and provides the first documented use of the Milky Way for orientation in the animal kingdom.
What factories looked like in the age of steam:
The mechanical power came from a single massive steam engine, which turned a central steel drive shaft that ran along the length of the factory. Sometimes it would run outside and into a second building.
Subsidiary shafts, connected via belts and gears, drove hammers, punches, presses and looms. The belts could even transfer power vertically through a hole in the ceiling to a second or even third floor.
And then electricity:
But electric motors could do much more. Electricity allowed power to be delivered exactly where and when it was needed.
Small steam engines were hopelessly inefficient but small electric motors worked just fine. So a factory could contain several smaller motors, each driving a small drive shaft.
Electricity changed factory architecture:
A factory powered by steam needed to be sturdy enough to carry huge steel drive shafts. One powered by electricity could be light and airy.
Steam-powered factories had to be arranged on the logic of the driveshaft. Electricity meant you could organise factories on the logic of a production line.
Old factories were dark and dense, packed around the shafts. New factories could spread out, with wings and windows allowing natural light and air.
The fractional horsepower motor took the domesticated factory drive shaft right into the home:
Electrification began in cities around 1915 and with electrification so too came the potential market for washing machines, refrigerators, vacuum cleaners and a host of other commercial appliances. ... By 1920, over 500,000 fractional horse-power motors were powering washers and other appliances in America.
Back in 2012, I wrote about fractional artificial intelligence. Here's a talk on the same topic from 2010. Watching this now it's like watching somebody stumbling around in the dark, but I think this is what's happening today.
Computers can be trained to see. But they don't necessarily fixate on the features humans see.
Adversarial Machine Learning is a technique to change an image to be recognised as something else, without looking any different to humans.
For example: a panda that - with the right fuzz of pixels added to it - looks to the computer 99.3% like a gibbon.
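The trick is surprisingly mechanical. Here's a toy sketch of the gradient-sign idea on a linear classifier -- the weights, labels, and numbers are all made up, and a real attack targets an image network, not a dot product -- but the core move is the same: nudge every input dimension by a tiny amount in the direction that pushes the score toward the wrong class, and the nudges add up.

```python
# Toy sketch of the gradient-sign idea behind adversarial examples,
# on a made-up linear classifier rather than a real image network.
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=10_000)        # toy classifier weights
x = w / np.linalg.norm(w)          # an input scored firmly as "panda"

def predict(v):
    return "panda" if w @ v > 0 else "gibbon"

assert predict(x) == "panda"

# For a linear score w.v the gradient with respect to the input is
# just w, so the worst-case bounded perturbation is -epsilon*sign(w):
# each dimension moves by only epsilon, but every move pushes the
# score the same way. (On real images, with hundreds of thousands
# of pixels, the same trick works with changes too small to see.)
epsilon = 0.02
x_adv = x - epsilon * np.sign(w)

assert np.max(np.abs(x_adv - x)) <= epsilon + 1e-12  # tiny per-dim change
assert predict(x_adv) == "gibbon"                    # classification flips
```

That's the whole unsettling point: the perturbation is bounded per dimension, but with enough dimensions the classifier's verdict flips completely.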
A hack: adversarial stop signs.
the team was able to create a stop sign that just looks splotchy or faded to human eyes but that was consistently classified by a computer vision system as a Speed Limit 45 sign.
Examples are given.
Ontology is the philosophical study of existence. Object-oriented ontology:
puts things at the center of this study. Its proponents contend that nothing has special status, but that everything exists equally -- plumbers, cotton, bonobos, DVD players, and sandstone, for example.
Things from their own perspective.
A desk telephone, from its own perspective, is constructed to entice (a curve of a handle, buttons that want to be pushed) to feed on sound. To be nourished by sound. And with that consumed energy, to reach out across the world and touch - out of an infinity of destinations and through the tangle - one other. And to breathe in relief at this connection, a sigh: another voice.
The Ethics of Mars Exploration, an interview with Lucianne Walkowicz:
it remains a fact that Mars is a place unto its own that has its own history, and what respect do we owe to that history? What rights does that history have?
Which makes me ask this:
Yes I believe there's a human imperative to go to Mars; yes I believe it has to be done in an inclusive way; yes space mustn't be about resource exploitation, a cosmic Gestell; yes potential life on Mars must be preserved.
But also, what Walkowicz said, the land, the land, the land.
I hike, and the land has an intrinsic right to be itself. But I also believe in the human experience of the land, that this is a component of meaning: so, paths? When you walk the trails of the American south west, you come to understand that the trail-makers are poets, giving the land a voice to sing through human experience: effort, surprise, endurance, revelation, breathlessness.
So there should be trails on Mars too.
Which makes me think this:
Who is working to understand this interplay of the subjectivity of the land, and the human gaze, right now? Not necessarily on Mars.
Landscape artists - landscape photographers - do this well.
And that's a process that, for Mars, could start today.
There is Mars exploration via rover right now. The rovers, of course, have cameras. Do they have landscape photographers on the team? Are those artists given rein to look, be, and create?
Why Hasn’t David Hockney Been Given The Keys To The Mars Rover Yet.
A list of interstellar radio messages. That is, ones we've transmitted, not ones we've received.
The first one, from 1962, in Morse code:
MIR LENIN SSSR

Sent to Venus.
A more recent one, A Simple Response to an Elemental Message, was transmitted in October 2016 and comprised 3,755 crowdsourced responses to the question
How will our present, environmental interactions shape the future? It was transmitted towards Polaris and will take 434 years to arrive. (Then another 434 years to hear back.)
The Golden Record is not a radio transmission but a physical item, copies of which were placed on Voyagers 1 and 2 in 1977. It includes pictures, sounds, music, and greetings in 55 languages -- including, in Amoy, spoken in southern China, these words:
Friends of space, how are you all? Have you eaten yet? Come visit us if you have time.
Which I hope desperately isn't misinterpreted as offering humanity up for lunch.
Voyager 1 will make a flyby of a star in 40,000 years. Star AC +79 3888 is 17.6 lightyears away, so the earliest we will receive a radio message back is in 40,017.6 years. We should remember to listen out for that. Year 42,034. June.
Over the weekend I heard it asked:
Who is keeping an archive of all the messages we send into space, and how will that archive be maintained? We won't receive an answer from the stars, if any, for hundreds or maybe tens of thousands of years.
If, when, we receive a reply saying
YES then how will we know what it's a YES about?
I spent the weekend at Kickstarter HQ in Brooklyn for PWL Camp 2017 -- a 48 hour, 200 person unconference where
the agenda is created by the attendees at the beginning of the meeting. Anyone who wants to initiate a discussion on a topic can claim a time and a space.
Tons of great conversations. A very open, generous, and talented crowd. My notebook is full but mostly incomprehensible. The above are four things that came up. I'm grateful for having been invited.
My Dearest Droogs,
Let's have a hardware-ish coffee morning! Soon!
Thursday 19 October, 9.30am for a couple of hours, at the Book Club, 100 Leonard St.
I'll be back from my travels, moderately jetlagged, and in no state to conduct linear conversations. So it will be especially important to (a) talk to everyone else who comes (they're always really friendly); and, (b) poke me in the ribs if you see me nodding off.
Usual rules: we don't do intros; everyone talks to everyone else; you order coffee from the counter, and please don't forget to pay otherwise the staff get confused; bring a prototype if you have one; actually working with hardware is NOT a requirement, you just have to be curious. Here's what happened last time.
Might be 5 people, might be 25. If you're a startup and want to ask me about the new R/GA IoT Venture Studio, I am happy to chat.
(Also posted to the coffee morning announce list to which you should subscribe for future updates.)
This is an amazing long essay, well illustrated, about someone who builds a heat-sensitive camera. It is peppered with poetic descriptions of what the camera sees.
the air itself glowing
And, looking outside,
the vegetation is not as reflective, so you get the "blackness of space" sky with regular-ish landscapes. It's almost like being on the airless, derelict Earth - preserved under the void after whatever disaster befell it.
I'm Google by Dina Kelberman.
an ongoing tumblr blog in which batches of images and videos that I cull from the internet are compiled into a long stream-of-consciousness. The batches move seamlessly from one subject to the next based on similarities in form, composition, color, and theme. This results visually in a colorful grid that slowly changes as the viewer scrolls through it. Images of houses being demolished transition into images of buildings on fire, to forest fires, to billowing smoke, to geysers, to bursting fire hydrants, to fire hoses, to spools of thread.
Does what it says on the tin.
Here's a system using artificial intelligence to generate human faces.
Worth it for: seeing what the system does when it's asked to generate faces from inputs outside the regular range. The faces are weird patchworks, a computer-native cubism.
See also: WaveNet, which makes realistic speech audio also using A.I. It's incredibly realistic, but search for
babbling and listen to what the system produces in the absence of any text to process. It's a mess of clicks, hums, and wet mouth noises -- horribly human but with an absence of intelligence. Uncanny.
Imperfectly real. (Not quite sure when the real got relegated.)
Two possibilities for this shift:
Legitimacy in the age of conversation is not communicated via iconic images. I've covered legitimacy previously, in the context of the media:
"People trust us because we've spent years developing a relationship with them. We have been scrutinized and found not evil. Our legitimacy comes from honesty, not from cultural signals or institutions."
The second possibility is that this is the age of Photoshop, and everything mediated is manipulated. Hard to build trust.
It is also the age of marketing where "greed is good" and "might is right" have been joined by another tyranny: truth is what you can get people to believe.
So there's space for an approach that doesn't (appear to) dress up and doesn't (appear to) convince.
See also: the Instagram trend called the plandid,
the planned candid -- where you look totally natural in your posing, like you've been caught in the act and just so happen to look triple-digit-Insta-likes amazing.
Examples are given.
I grew up in the waning years of the Cold War, those happy days where apocalypse was total but distant, rather than continuous, partial, and immediate. The word "DEFCON" is engraved on my soul. Turns out each of the five levels has a code word associated with it too.
From DEFCON on Wikipedia.
The Triumphant Rise of the Shitpic, the patina that comes from cycles of screencapping and upload-compression as a picture is shared and shared again,
the first non-numeric indicator of viral dissemination.
Wonder how long it'll take for Domino's to adopt this.
Wonder which version of the iPhone will have a computational photography mode to create pre-distressed selfies, for that already-shared look.
See also: this video of the LaserSharp Denim HD Abrasion System which creates identical pre-distressed jeans.
See also: Gudak, the disposable camera app. You get only 24 photos at a time; a roll of film takes three days to develop; the photos are grainy and the light that leaks over them is the colour of summer days that never ended, when you were still young and you still laughed and your life stretched out ahead of you and you could still be anything.
Fun app. Five stars.
This oral history of the CGI visual effects in Terminator 2 is an awesome long read. So much of the use of computers was new, then.
Also awesome for this photo of Robert Patrick, almost naked, covered in a Sharpie grid, being filmed for motion capture.
Robert Patrick played the T-1000, the liquid metal morphing Terminator from the future.
Also, also awesome for the terminology of the engineers and artists:
So, we had what we called RP1 through to RP5. Robert Patrick - RP - that was the actual naming convention.
RP1 is the blob, an amorphous blob. RP2 is a humanoid smooth shape kinda like Silver Surfer. RP3 is a soft, sandblasted guy in a police uniform made out of metal, and RP4 is the sharp detail of the metallic liquid metal police guy, and then RP5 is live action.
Robert Patrick, the actor, the actual dude, gets relegated from his own name.
RP5. Fade Out.