The Information Arms Race

August 11, 2013

This GCHQ / NSA / Snowden thing is confusing. Part of me is shocked and horrified. Another part of me is jadedly unsurprised. (Is “jadedly” a word?) I think I already assumed that they know everything they want to know. As Scott McNealy maybe did or didn’t say, “you have no privacy, get used to it”.

Today a tweet from @Orbitingfrog alerted me to more disturbing news: encrypted email company Lavabit have shut themselves down in protest over a mysterious government investigation that they are even forbidden from talking about; and Silent Circle, founded by Phil Zimmermann – the inventor of Pretty Good Privacy (PGP) – have pre-emptively shut down their secure email service and deleted its content so that they cannot be subjected to the same pressure. Some years back Zimmermann was under criminal investigation for distributing the PGP code worldwide, which the US government claimed breached laws against the export of munitions. Zimmermann printed the code in a hardback book and exported that instead.

Although the strong-arm stuff is scary, it kinda makes sense. The Lavabit episode seems to confirm that even the NSA cannot crack RSA-grade encrypted material. Instead of quietly snooping and leaving the public docile, they have no choice but to be honest and say “We are the government and we are in charge. Give us that stuff or you are fucked.”

It’s more or less inevitable that there is a three-way information arms race between individuals, corporations, and government. Information is power. It is natural for governments to always want more information, more complete information, and more reliable information. Commercial corporations have the same instinct. You don’t have to assume they are evil; they are just trying to know their market. Consumers get no choice in this. You try buying a train ticket online without “registering”. Oft and betimes, the consumer/voter just relaxes. It’s kinda useful when I go back to GoCompare and they already know everything about me. But on the other hand, we instinctively bristle. They have all the power and we don’t!! The Freedom of Information Act tried to restore the balance, but it’s feeble.

Before you feel too powerless, however, just recall that everything changed in 1976. This is when Diffie and Hellman published the key-exchange method, followed the next year by Rivest, Shamir, and Adleman’s publication of the RSA algorithm implementing the idea. Arranged carefully enough, you can make any communication completely secure. Wouldn’t this make any government terrified? What do you do? Well, partly you sniff as much as you can on the assumption that most traffic is not encrypted, or that you can read the envelope metadata if you can’t read the letter, or that you can intercept at the relay points that the internet relies on. The counter-thrust for the latter is envelope-content splitting.
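The 1976 idea is simple enough to sketch in a few lines of Python. This is a toy Diffie–Hellman exchange; every number below is a tiny illustrative value (real deployments use primes thousands of bits long), so treat it as a demonstration of the principle, not an implementation.

```python
# Toy Diffie-Hellman key exchange. The numbers are tiny and purely
# illustrative; real systems use primes thousands of bits long.
p, g = 23, 5              # public values: a small prime and a generator

a, b = 6, 15              # private keys, one per party, never transmitted
A = pow(g, a, p)          # Alice publishes g^a mod p
B = pow(g, b, p)          # Bob publishes g^b mod p

# Each side combines the other's public value with its own secret.
# Both arrive at g^(ab) mod p -- a shared key that an eavesdropper
# who only saw p, g, A, and B cannot feasibly compute at real sizes.
shared_alice = pow(B, a, p)
shared_bob = pow(A, b, p)
assert shared_alice == shared_bob
print(shared_alice)       # -> 2
```

The point is the asymmetry: everything sent over the wire is public, yet the two parties end up with a common secret.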

But at the end of the day, the government can’t win the technology battle; they have to resort to legal restraint. An unsuccessful attempt was the Clipper Chip initiative. The idea was to generously provide the world with obligatory encryption methods which the government could always decode. They gave up. A successful example is the infamous 1998 Digital Millennium Copyright Act. Entertainment corporations knew they couldn’t develop perfect DRM mechanisms. So they convinced the US government to make it illegal to deploy or develop technologies intended to circumvent DRM mechanisms.

My guess is that we will soon hear of plans in both the UK and the USA to make non-Government use of the RSA algorithm a criminal offence, or more generally to make it an offence to send communications that cannot in principle be decoded by appropriate authorities.

Before you accuse me of being a paranoid old hippy, let me just say that I am not even sure where my sympathies lie. I have a bristly rebel side and a pragmatic patrician side. Viewed from above, it’s a fascinating struggle.


Cosmo Guessr?

May 31, 2013

I got hooked on GeoGuessr over the last few days. Apparently it’s the latest Internet Craze so I don’t feel so special, but I do feel a bit of a junkie. It’s hard to stop. The idea is v.simple. They show you a Google Streetview image of some random place in the world. You can roam around somewhat with the usual arrows, looking at the vegetation, hunting for street signs, etc. Then you guess where you are by plonking a pin on the world map. The closer you get the more points you score.

The closest I’ve been is 62km, somewhere in Poland. Generally on a five-go game I am averaging about ten thousand points, corresponding to about 2000km out on average. I usually get the right continent… I do NOT search for things on the real Google. I decided that is cheating, although my kidz don’t agree. Sigh… the look-it-up generation.

My main conclusion is that most of the world is scrub, and the rest looks vaguely like Kazakhstan.

So who is up for making CosmoGuessr? It would be easy to knock up using the Google API. Or perhaps Jonathan Fay could add it to World Wide Telescope. Actually this is likely to be the most boring game ever, because of the Cosmological Principle. Everywhere looks the same on average. Bunch of random galaxies. You would at least spot when you were at low galactic latitude I suppose, by seeing all those pesky stars. In fact maybe to work it has to be GalactoGuessr, confined to the Milky Way. Even then, could be a bit dull. If you happen to land bang on the Eagle Nebula or whatever you’d be in there, but otherwise… It would basically boil down to estimating (l, b) from star density. Or am I wrong?
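To see why the game would be dull, the star-counting strategy can be sketched. This is a toy model — the exponential falloff, its 10-degree scale, and the normalisation are all invented for illustration, nothing like a fitted Galactic model:

```python
import math

# Toy GalactoGuessr bot: assume star surface density on the sky falls
# off exponentially with galactic latitude |b|. Both n0 and the scale
# are made-up numbers, purely for illustration.
def star_density(b_deg, n0=1000.0, scale=10.0):
    """Stars per square degree at galactic latitude b (degrees)."""
    return n0 * math.exp(-abs(b_deg) / scale)

def guess_latitude(density, n0=1000.0, scale=10.0):
    """Invert the toy model: from an observed density, guess |b|."""
    return -scale * math.log(density / n0)

# In the noiseless toy case the bot recovers |b| exactly...
b_true = 25.0
print(round(guess_latitude(star_density(b_true)), 6))   # -> 25.0
```

...but notice it learns nothing whatsoever about l, which is exactly the problem: a star-counting strategy pins down latitude and leaves longitude a pure guess.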

Meanwhile, you can play with a billion stars on our zoomable Milky Way from UKIDSS and VISTA data. Still an unofficial feature of our archives at the moment, but I am sure we will make it official sometime… While I am plugging (relatively) new stuff, check out the UKIDSS coverage maps. You can browse them in Aladin. Many thanks to our CDS chums for pushing this through the IVOA. (Hope you like the new look web pages btw). Anyway, enough VO-ish meanderings.


End of the University Part II/III : online education

May 1, 2013

Oh dear. Everybody knows you should never write Paper I unless you really are going to do Papers II, III etc. Posterity looks unkindly on failed pomposity. Back in November I wrote End of the University : Part I which was about the Browne report and a naive approach to “student choice”. I think perhaps I can count The Big REF Gamble as Part II – lots of us are investing for success, hiring new staff before the REF, but we can’t all win. These are both examples of market disruption, which may force a re-structuring. You may have various opinions on whether this is a good thing or a bad thing.

So what about good old disruptive technology? The music business got turned upside down by the internet and file sharing, and the book business is likewise in turmoil. The disruptive technology here is the ease of copying. The reaction of entrenched commercial interests was the development of digital restrictions management. Whatever you think of that, the market structures are re-forming, and we need to get used to the idea that we don’t own works of art, we rent them – or if you like, we pay for performances. Of course the logic that follows is that payment for performance should go straight to the artist – who needs the middleman?

So can the same thing happen to education? The key thing here is not ease of copying but economy of scale. Hundreds of years ago we invented lectures so we could teach 150 students at a time instead of 5. Now we can do thousands at a time. My own university has started its own experimentation with Massive Open Online Courses (MOOCs). My colleague Charles Cockell ran a five-week course in Astrobiology. Forty-one thousand students registered, and five thousand survived the whole course. I am toying with another course idea myself, along with the boundlessly energetic Dr H. Well this is very exciting of course, but you start to wonder why anybody would pay nine thousand sponduliks for a university degree from the University of West Somerset when they can sit on their sofa and take courses from Harvard…

One answer is assessment and another is feedback, and the whole business of giving credit. Marking exams has not gotten any more efficient, and likewise the provision of individual feedback. Multiple Choice Quizzes are good, but not enough. If somebody can solve this problem, things will really change. This recent Guardian article reports the debate in California about whether MOOCs will allow private providers to move into education.

Meanwhile, it could well be that content delivery and assessment will decouple. Oh what interesting times.


Dogs, ducks, and sales tax

January 20, 2013

Herewith some meanderings about transparency, evil, and search engines. Only connect, as Goethe said. Or was it E. M. Forster? Better Google it.

My latest Mastercard statement (patience, dear reader) got me fuming. It had a whole bunch of “foreign transaction” fees – 2.75% each. Never seen that before. My first reaction was to start thinking about getting my card from a different bank. However, it seems that actually this charge was always there but bundled up inside the exchange rate quoted. A change of legislation now requires card suppliers to explicitly specify the fee, separately from the rate applied. So that’s good isn’t it? A better informed consumer has more power.

Part 2. On my latest US trip the sales tax thing bugged me, as it usually does. An item is labelled as $7 in the store. You are just getting the exact notes out when the store chappie says “seven dollars forty-four please” and suddenly you have another three ounces of coinage. When you tell them that in the UK the tax is already in the price, they are generally mystified. When they do understand, they will say that the UK method sounds like a bad idea. You see, they want you to know that they are charging you only seven dollars; it’s the government that is charging you that forty-four cents, buddy. Just remember that next time you vote. So that’s bad, isn’t it? I just want to know the real price please. Whoop, whoop, personal inconsistency alarm.

Part 3. Read an article in the Observer today about Google and the future of search. Bit of a puff piece really, but never mind. Towards the end of the article there were links to a couple of alternative search engines which I hadn’t seen before. One is Dogpile. Lovely name, nice look, but it seems to be just an aggregation of other engines. The other is DuckDuckGo. This is a v.interesting beast. They make a big thing of (a) not tracking you, or passing on your search terms to the websites you click on, and (b) not filtering and ordering the results you get based on your location and search history. You can read about the filtering issue here. With Google, you live inside a search bubble fitted around yourself; different people will get different results. So… this is good, isn’t it, because Google are efficiently giving you what you want? Or… maybe this is bad, because prejudices are reinforced, and we don’t know how we are being manipulated?

Part 3a. The comment stream after that Observer article has a bi-polar argument about whether Google is a visionary force transforming our world, or just a bunch of good old fashioned cynical capitalist bastards, manipulating what we do to make money. Hmm. Both and neither I think. There are much easier ways to make vast amounts of money, so cynicism doesn’t look like the right explanation. I think folks at Google really do want to do groovy and visionary and positive things, and they also really do want to make money out of us. Both at once.

The Internet joined up all the pipes. The Web set up taps that could run water from anywhere. Yahoo and Google ran water through all the pipes. The world seemed transparent. We could live in the whole world at once. Google said “don’t be evil !” and lo, there was a brave new world.

Too good to last of course.


Java in crisis?

October 22, 2012

We all use Java every day: stand-alone Java applications like Topcat and Aladin; in-web-page Java applets (Aladin again); and on the server side (e.g. WSA and VSA). But now it seems there is a security crisis; serious people are telling us to disable or remove it. Wuh ? At the risk of boring the ungeeks, let me explain how I just stumbled into this understanding. It’s a classic tale of confusion, coincidence, and mysterious disappearances.

I am a big fan of Tiddlywiki. It’s a personal wiki – a kind of hyper-notebook. You run it on your own computer, or even from a memory stick. It’s very clever. Just a single html file, containing both your text and the JavaScript needed to edit it. The tricky bit comes when you want to save your changes. That requires your browser to write a file onto your computer – a new version of that single html file. That’s done with Java, as opposed to JavaScript. You place a file called “tiddlysaver.jar” in the same directory and it does the work. You have to give explicit permission to write onto your disk of course. We ain’t nuts.

So… recently… for reasons I won’t bore you with, I wiped my Firefox installation and made a new one. (Well ok – my wordpress front page widgets weren’t working, and after many tortured days, it was the only fix that worked.) A few days later I tried to update one of my tiddlywiki notebooks. It wouldn’t save. Trawled through various FF settings but couldn’t fix it. So I tried to do my edits in Safari. Same. And Chrome. Same. Oh. Maybe the FF change was a coincidence ? If it fails everywhere, it must be a MacOS problem? Then I suddenly remembered I’d had the identical problem when I upgraded to Mountain Lion. Sensible chap that I am, I’d left myself a wee note. It said “go to the Java Preferences app and tick the box that says enable applet plugin”. So, off I goes. Hmm. No such checkbox. Must have been removed in some recent system upgrade.

Now… a few weeks back I had a hair tearing Time Machine problem. Apparently my backup was going to take 11,158 days. I spent several days fretting about this on and off and wondering what I had screwed up. Then lo! A new Software Update was announced which amongst other things said “this also fixes a problem some users may have been having with Time Machine backups”. And yea, indeed, verily did the SU completely fix this problem. Grrr. Wasn’t me at all. Wish I’d known.

So… maybe it’s another Apple SNAFU. Is there a new SU ? Yup. And look! It’s a Java update! But… (a) it still didn’t fix the problem and (b) the Java Preferences app has completely disappeared !! I check out the “more detail at apple support” page. This says

This update uninstalls the Apple-provided Java applet plug-in from all web browsers. To use applets on a web page, click on the region labeled “Missing plug-in” to go download the latest version of the Java applet plug-in from Oracle.

This update also removes the Java Preferences application, which is no longer required to configure applet settings.

Click on the region ? What region ? What the hell does that mean?

Then I read a bit more on the Tiddlywiki home page. It seems all the major browsers are clamping down on Java, disabling by default, and making you jump through more hoops. For Firefox there is a specific Tiddlywiki fix – a FF extension called TiddlyFox. So at least I am (temporarily) sorted…

On Chrome, if you try to run an applet like Aladin, you get a banner saying “Java(TM) is needed to run some elements on this page” and there is a button labelled “install plug-in”. This takes you to an Oracle page which says

Chrome does not support Java 7. Java 7 runs only on 64-bit browsers and Chrome is a 32-bit browser.

If you download Java 7, you will not be able to run Java content in Chrome and will need to use a 64-bit browser (such as Safari or Firefox) to run Java content within a browser. Additionally, installing Java 7 will disable the ability to use Apple Java 6 on your system.

OK, screw that then. How about Safari ? The Aladin applet seems to run ok. But Tiddlywiki does not. This is because it wants to write to your disk. Some documentation on the Tiddlywiki site told me what to do… open Safari preferences, go to “Advanced” and tick “Show Develop menu in menu bar”. Then a new menu item appears in your menu bar called “Develop”, with options for grown-ups. (Don’t forget to open the door marked “beware of the leopard”.) Finally move down that menu and mark “Disable local file restrictions”. Yay !! But guess what. That menu item no longer exists. Somebody really doesn’t want us to do this.

Finally … I started roaming around the interwebs the way you do, seeing if other folk had the same probs. I stumbled over this nice Java Tester Page. This is where I first saw the scary words “Java Security Flaw”…  I then followed the link to this article by Michael Horowitz and things began to make sense … sort of.

It seems there are serious security flaws that won’t be fixed until February 2013. Horowitz says

Java is used by both installed applications and websites. If you only need Java for an application, disable it in all your browsers. OS X users on Lion and Mountain Lion had Apple do this for them (more below). Windows users in this situation may want to consider the portable version of Java available at portableapps.com.   If you need Java for a website, enable Java in a browser used only on the site that needs it. For all other websites, use a browser that has Java disabled.

I can remember back when Java was the next big thing. Now, it’s all but a curse word.

Jeez.  Gordon Bennett. Is it really true ?


Museum of Hoaxes

April 1, 2012

Now midday has passed, if there is anybody still trying to download the Planck data from Wikileaks, I can gently point out it was all a Poisson d’Avril by that naughty Strudel chappie. Stu has lots of other astro-geeky stuff, so check it out.

Planck leaks wins this year’s April Fool prize for me – funny, near the knuckle, and had me taken in but bemused for a little while as it was the first one I came across. But I also very much enjoyed Jon Butterworth’s “evidence for String Theory” post. Just slightly nutty. But that’s String Theory for you. Outside the science world, Google seems to have taken over the role of the BBC as official purveyor of April Fool jokes. I liked “Introducing Gmail Tap” and the “Google Street Roo”. My kids really liked the Quest View in Google maps. (Check it out before it goes!)

Is this an April Fool’s joke ? There seems to be genuine doubt !

So I was tempted to drift through the InterWeb and collect lots of old classic April Fool jokes, like the Spaghetti Harvest and the Flying Penguins and so on, but I stumbled across a web site that has already done it beautifully – the Museum of Hoaxes. As well as the Top 100 April Fool’s Day Hoaxes, it has lots of other hilarious stuff. My personal favourite is the bloke who convinced Alaska residents that Mount Edgecumbe was going to erupt, by setting fire to hundreds of old tires in its crater. The Museum of Hoaxes web site seems to suggest it has an actual physical presence in San Diego, but maybe they just photo-shopped those pictures…


The decline of email

March 14, 2012

I like the new version of the site stats for WordPress blog authors, especially the geographical stats. I am more international than I realised ! As usual I am a tad behind Telescoper. Apparently he reaches about a hundred countries and I reach about fifty. Meanwhile in Twitterland I have 759 followers. Gulp. Who are all these people ? And a steady stream of people are joining Facebook and want to be my fwiend. Everybody is speaking to everybody else ! It’s ballooning out of control ! But… I seem to get significantly less email than I used to…

Is this a well known thing, or just me ? And is it connected with the rise of Blogs, Twitter, and Facebleuuchhh ? Like most scientists, I started using email in the early 80s. At first it was mostly inside my own institution (RGO at that time). Then there was growing chatter between Starlink nodes. Next, thanks to DECnet and SPAN, it started being possible to send emails to Oz and Yankland and so on. That was magic. Then TCP/IP and SMTP took over, every scientist had an email address of the same form, and the world really became transparent.

The next bit wasn’t so groovy. Microsoft made email so easy (Outlook was one of their best products) that it was discovered by our University administrators. Suddenly they could pester you and demand stuff thirty times a day. Then your auntie and all your cousins found out about email (Outlook Express…). Clearly, email was going exponential, and it was getting to be a serious problem. I noticed that senior people learned to write three sentence emails, whereas postdocs and administrators sent you six screenfuls of stuff.

But now the tide seems to be going out. Natural feedback cycle ? Everybody going bonkers on Twitter instead ? Or have I just become less popular ?


Cosmic Convergence

January 5, 2012

I have a cute diagram for you. But first, as Frankie Howerd would say, The Prologue. Some background is necessary in case you get the wrong take-home message once I unveil the interestin’ picture…

Periodically somebody frightens us with tales of how The Data Deluge Threatens Science. (Yes, guilty.) Cynics will suggest that Moore’s law runs even faster, so computers will always be good enough – what’s the problem? Actually, I think the interesting things are those that are not improving exponentially – last mile bandwidth, disk I/O speed, and human resource. The first two are a bit of a bummer for individuals, but fixable if you are a pro data centre – tune your TCP buffers, open multiple channels, hang thirty discs off your motherboard, etc. So this drives us to a service architecture. But so does the third thing: people effort. Developing interfaces to the data, and curating it as it comes in, takes work. A lot of this scales per archive rather than per bit. The real problem, and why we need the VO, is the number of archives, not the number of bits. It’s a Tower of Babel thing.

Enough of the VO lecture. The size of each major astronomical archive certainly is growing fast. But so is the size of a typical hard drive. As part of some work with Bob Mann and Mark Holliman, I was collecting some data on these things, when suddenly it occurred to me not just to talk in general terms but to actually plot one on top of the other. Exhibit A therefore shows (i) the evolution of the size of PC disks, from the wikimedia commons page here, and (ii) the total size of the ESO Science Archive, divided by 200 – data kindly provided by Paolo Padovani. The step function in 1999 is real – it’s when the VLT switched on. Otherwise, they track each other beautifully. Over two decades, while volumes have increased by factors of tens of thousands, the number of PC drives needed to hold the ESO data has stayed the same within a factor of two.
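The comparison boils down to dividing archive size by single-drive capacity, year by year. A sketch with made-up numbers (not the actual ESO or Wikimedia figures) shows the flavour of the result:

```python
# Drives needed to hold an archive = archive size / single-drive
# capacity, year by year. All figures below are invented to
# illustrate the shape of the curve, not the real ESO/Wikimedia data.
years = [1995, 2000, 2005, 2010]
archive_tb = [0.5, 8.0, 60.0, 400.0]   # hypothetical archive sizes (TB)
drive_tb = [0.002, 0.04, 0.3, 2.0]     # hypothetical PC drive sizes (TB)

drives_needed = [a / d for a, d in zip(archive_tb, drive_tb)]
for y, n in zip(years, drives_needed):
    print(y, round(n))
# Volumes grow by a factor of ~1000 over the period, but the drive
# count stays flat within a factor of two -- as with the real curves.
```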

I guess both of these things, like a number of others, are loosely driven by a combination of integrated circuit technology and economics. But the fact that they are so close seems surprising. Maybe there is some kind of weird technology central limit theorem thing going on.

Comparison of growth of ESO archive with hard drive capacity. Disk data used under GPL.


Ancient VMS vs Unix joke

September 4, 2011

My post yesterday (Unix hair tearing) brought a couple of ageing VMS fans out of the woodwork. Suddenly I remembered an old joke. It’s long enough that I will put it in its own post, rather than in a comment. This joke was going the rounds about eighteen years ago when Starlink switched from VAXes to Unix machines, and suddenly we all had to get used to commands that seemed to be compact but incomprehensible gibberish, such as man, ps, rm, biff, ls, cat, etc…


A young scientist has an urgent job to finish, but disastrously the whole departmental network goes down apart from one ancient VAX. He hears there is an old-timer a few corridors away who still knows how to use the VAX, so he rushes down, bursts in, and insists that the old guy shows him what to do, because, you know, sorry, but this deadline is really important.

“Calm down”, says the old guy, “what do you want to know ?”

“Well, ok, for instance, how do I edit a file ?”

“You type EDIT FILENAME”

“Right, fine, suppose I want to make a copy ?”

“You say COPY FILENAME1 FILENAME2”

“Err, right, ok, now suppose I need to delete the file ?”

“You say DELETE FILENAME”

“Ah, right, right, err.. now what if I want to print it ?”

“You type PRINT FILENAME”

“But what if I just want to see it typed onscreen ?”

“You say TYPE FILENAME”

“What if I need to figure out what a command does ?”

“You say HELP COMMANDNAME”

“Ummm.. umm…. suppose I want to create a new directory ?”

“You use CREATE/DIRECTORY”

“Ok, ok, but look – how the hell am I supposed to remember all that ?”


The list goes on. Check out this comparison table of VMS and Unix commands.


Unix hair tearing

September 3, 2011

I have a Mac because you get nice easy-peasy pretty stuff and proper Unix all in the same machine. Some days when you crack open the terminal you feel a great sense of power and flexibility. Other days you want to strangle the people responsible for Unix. (Anybody for a bring-back-VMS campaign ?)

I use iTerm rather than the Mac-supplied Terminal. Development on it has stopped, so today I updated to the successor project, imaginatively entitled iTerm2 (strongly recommended – check it out). Suddenly a whole bunch of my aliased commands bombed. For example, to check what tunnels I have running I type “tunnels”, defined as

alias tunnels='ps -uaxww | grep ssh'

This gave an error message along the lines of “no such user” whereas when I re-started the old iTerm, it still worked. I narrowed this down to “ps -u” behaving differently in the two apps. In the old iTerm this adds a column to a ps listing which shows the UID owner of processes; in the new iTerm2 the -u switch is for filtering by UID, and so the command expects you to specify that UID. (The regular Mac Terminal behaves the same).

I thought I must be going crazy. In both cases I was simply running a dumb terminal, with the same login command, running the same shell, on the same hardware, running the same version of unix, issuing the same very simple unix command. Wuh ?????

After much hair tearing and some Googling, I found this blog post. It seems that the problem is an environment variable called COMMAND_MODE. When iTerm starts a terminal it sets this to “legacy”, whereas iTerm2 and Terminal set it to “unix2003”.

Unix03 is an attempt at Unix standardisation by a body called the Open Group (see here). The other standardisation attempt is POSIX. According to this article no Linux or BSD vendor has achieved full compliance.

So… typical dilemma … do I leave my stuff as it is and set COMMAND_MODE=legacy, or do I try to update my stuff ?

Anyhoo. Sorry for boring you, but ain’t that just bloody typical unix ?

