A whistle-blower revealed that the NSA claims, within the US
intelligence community (IC), to have direct and exhaustive
access to the data servers of many large US-based digital
services. Most reactions that I’ve read come either from
privacy advocates, who would be flabbergasted if they were
surprised, or from a small majority of the US public who sees
this as a reasonable response to terrorism.
The affair revealed many issues, not the least of which is
that the IC didn’t spot someone with so much access
considering blowing the whistle: discussing the moral issues is
very important and relevant, but please take a second to
imagine: what if that guy had defected?
This goes to show that there are massive problems within the IC,
mostly political: a criminal sense of entitlement and rectitude,
a lack of self-criticism, and endogamy.
That goes beyond the IC and into the US Army: Abu Ghraib
was an example of that entitlement gone wild. Another,
less disheartening example, and one importantly described
as relevant by the authorities, would be the documentary
Restrepo: soldiers carry cameras while keeping the peace
in an Afghan valley, and the filmmaker later simply edited
their rushes into a film. Their constant, open, transparent
contempt for the local population reeks through every hand
gesture, every sentence.
I suspect that their relative silence is habitual, but it might
be an outrage so great it can only be contained, or,
hopefully, the start of a significant reboot.
What are my intentions with this piece?
I want to challenge what appears to me to be a consensus that
privacy should be respected.
More specifically, I believe that computers could be
trusted intermediaries between two steps that can now be made
distinct, but were not with human inspectors:
the collection of information, and
its revelation to a (prejudiced) human.
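A minimal sketch of that separation, with invented names and an invented scoring rule: the intermediary collects everything, but only ever reveals a verdict, never the underlying records.

```python
# Sketch of a "trusted intermediary": collection and revelation are
# separate steps, and the human side only sees a verdict, never the data.
# All names and the scoring rule are hypothetical illustrations.

class Intermediary:
    def __init__(self, score_fn, threshold):
        self._records = {}         # raw data: collected, never exposed
        self._score_fn = score_fn  # analyst-supplied scoring function
        self._threshold = threshold

    def collect(self, person_id, record):
        """Step 1: intrusive collection, stored internally only."""
        self._records.setdefault(person_id, []).append(record)

    def reveal(self, person_id):
        """Step 2: minimal revelation -- a boolean, not the records."""
        score = self._score_fn(self._records.get(person_id, []))
        return score >= self._threshold

# Toy scoring rule: count records tagged as suspicious.
suspicious = lambda recs: sum(1 for r in recs if r.get("flag"))

m = Intermediary(suspicious, threshold=2)
m.collect("alice", {"flag": False})
m.collect("bob", {"flag": True})
m.collect("bob", {"flag": True})
print(m.reveal("alice"))  # False
print(m.reveal("bob"))    # True
```

The point of the design is that `reveal` is the only public surface: a human never holds the records, only the threshold decision.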
There is no real contradictor.
I’m not really one, but we need one.
I only hate one thing: “You disagree with me,
therefore you are stupid.” One learns immensely from
understanding coherent opposing points.
The main reason the Sophists taught arguing both sides:
making a position cogent is the best way to make sure you got
it, rather than ridiculing it as an in-group.
The point was not absolute relativism, but the humanistic
concern that one might be wrong, and an exercise
against abrupt defensiveness.
More than a corrupt Congress or the overall Red-vs-Blue
political and geographical structure, this is the
problem of our time, on social media and elsewhere.
Better than “Who cares: those are bad guys, and I’m not”, a
truly misinformed approach.
Abuses are far too easy to set up, as demonstrated by
the US secret services’ long history of criminal abuses
of the law.
There is a track record of innocent people being
tortured to death, including the anal rape of a then
9-year-old boy in front of his father.
Yes, the IC includes, among its most central members,
documented child rapists, and their apologists
roam free. I did mention a cultural dissonance
earlier, didn’t I?
Actually, that particular case, and the use of
torture in general, makes a compelling case against secrecy in
extreme situations: rather than interrogations “enhanced” by
violence, which damages the truth, an interrogation
augmented by network counterpoints would help spot
contradictions. I’m not sure anyone needs a big
computer and that much data for that, but if need be:
my point is that this idea is better than physical
violence, a technique currently and repeatedly used,
against all international treaties.
Complete digital disclosure seems a far
preferable option; if any differences are to be
noted, it works sometimes.
Obviously, that needs to be decided by a judge
who has a strong understanding of due process
I can’t imagine why anyone would imagine I’d be a
terrorist apologist, but I came too close to several of
those famed events, too many to think that fighting
terrorism is a costly luxury.
Both brothers were threatened
in ’95, so anyone assuming that it started in 2001
has my sincerest contempt.
More recently, in Boston, the real problem was the
completely excessive reaction of the Boston Police.
myself above Victoria with security consultant
I still think car accidents are a thousand times more dangerous.
Many friends have SSSS screenings, or Arabic names.
The current system is not good, far from it, and
trying to improve it would be welcome.
One potential improvement would involve algorithms,
roughly what is known as machine learning.
I work on those, I’m harshly critical of machine
learning, and I’m all too familiar with the
limited capabilities of network algorithmics
outside of the IC, so this is very far from a
current recommendation: it is an exercise in theory.
I know that the IC has been working on networks since
identifying the cell structure of the 9/11/2001 suspects (to
figure out how many masterminds were outside of
the planes, presumably alive, and which); I
suspect that the IC has better software solutions,
and a far less critical approach to machine
learning. Actually, I know they do: check the
little information that trickles out about the company.
I want a better understanding of those issues.
E.g.: status for lawyers, Glenn Greenwald, friends.
Those exist for companies; why not develop them for government?
Grey information, but with a better return.
Secrecy is useful, but certainly not on the scale we saw:
large digital corporations are transparent about
their practices, and that is a good thing.
Many security consultants would prefer otherwise.
Problem: the culture of security consultants in general,
and of the industry; they dismiss opponents as merely negative.
The IC is so big that it fails to have proper internal controls;
there is always the faint possibility of a whistle-blower who
would fix that, which is basically Julian Assange’s argument.
Not really a hero,
at least not a knight in shining armor:
not very brilliant,
fairly basic mistakes,
but he got the essentials right.
Not a total victim like Manning,
who was depressed, gay during DADT, and tortured.
He was in control, considered whether he could have done it or
not, and took initiative.
A very partial leak: focused on the core of the
Reality might be hubris,
or it could be sleeper agents within all six.
That last point (sleepers) is the scariest:
a large corporation with that responsibility
can have leakers of that magnitude.
The same goes for the ‘IC’
(intelligence community), the network of the
CIA, the NSA, other agencies, and their many contractors.
What is the IC?
It’s a coherent network of a dozen ‘letter soup’ agencies and
the private contractors who work for them, all holding classified
accreditation; it’s hard to circumscribe, as most of the contracts
involved are secret, but it is increasingly private.
One interesting aspect of such a resounding echo chamber is
its ability to go through phases: lately, it has been a
fascination with applied graph theory, combined with extensive
records: ‘Big Data’. A company representative of that trend
would be Palantir.
That fascination is old: I first met it while
talking to a member, during my PhD.
Most of them can be seen watching
“Prism”, an implementation and a concept.
No doubt that Prism is squarely against the US Constitution;
all those briefed on it have a duty to disobey.
What if technology has made the constitution slightly obsolete?
However, overall gathering is not necessarily a bad thing in principle.
Many threats and abuses, too long to list, scary and
rightfully the main reason why this is not a good option
for the moment.
Potentially changed by strong cryptography and human oversight.
None of those are safe, as fiction has demonstrated;
the best example is probably the eponymous Minority Report.
An intellectual exercise in changing the constitution:
proper judges, trained on those models, capable of
assessing the many false positives.
Having judges capable of understanding technology
might seem even more unlikely, as demonstrated by the
criminal incompetence of [Hadopi].
My idea is closer to the computer in Person of Interest,
without the creepy controllers who try to take over.
OK, that is not possible: extreme power corrupts
extremely, etc. This is an intellectual exercise: I want
to list what is needed to challenge the principle behind
privacy, namely that information is either revealed and known,
or not at all.
Basically, exploring the idea of grey information,
steganography, enlightenment, and their ramifications.
Ads in GMail do it quite well, with a spectacularly lower
cost for mistakes, so there seems to be a possibility.
What if a computer could,
with a lot of help from analysts (notably through what is
generally called ‘supervised learning’, which is actually just
handing the machine a sample), who never really see personal
information, find the 50 most suspicious cases, including the
10 most threatening plots around the world?
Of course, it would have to be very intrusive in its
collection of information;
however, the revelation would be minimal.
I know what I’m talking about: this is highly possible, but
requires dedicated software.
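As a toy illustration of “just handing it a sample”: a minimal nearest-centroid classifier, trained on labeled feature vectors the way an analyst might hand over anonymized examples. The features and labels are invented.

```python
# "Supervised learning is just handing it a sample": a minimal
# nearest-centroid classifier trained on labeled feature vectors.
# Features and labels are invented for illustration.
def centroid(rows):
    n = len(rows)
    return [sum(col) / n for col in zip(*rows)]

def train(samples):
    """samples: {label: [feature vectors]} -> {label: centroid}."""
    return {label: centroid(rows) for label, rows in samples.items()}

def classify(model, x):
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(x, c))
    return min(model, key=lambda label: dist(model[label]))

# Analysts hand over anonymized samples, never raw identities.
model = train({
    "benign":     [[0.1, 0.2], [0.2, 0.1], [0.0, 0.3]],
    "suspicious": [[0.9, 0.8], [0.8, 0.9], [1.0, 0.7]],
})
print(classify(model, [0.85, 0.75]))  # suspicious
print(classify(model, [0.15, 0.25]))  # benign
```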
How to isolate concerning cases?
The biggest problem: false positives.
Actually, false negatives too: those are the attacks that slip
through, like London, but how to treat those is obvious: you
don’t. The really concerning moral problem is: can we train
analysts to make sense of false positives without a clear moral
compass?
Fiction provides us with a clear response when you have
an absolute moral compass: Person of Interest. If ugly
people are always violent and morally corrupt, and cute,
timid people are always adorable, there is no problem.
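The false-positive problem is just base-rate arithmetic; with illustrative, and optimistic, made-up numbers:

```python
# Why false positives are the biggest problem: base-rate arithmetic.
# All numbers are illustrative assumptions, not real figures.
population = 300_000_000     # people screened
actual_threats = 3_000       # assumed genuine plot participants
sensitivity = 0.99           # P(flagged | threat)
false_positive_rate = 0.001  # P(flagged | innocent) -- very optimistic

true_flags = actual_threats * sensitivity
false_flags = (population - actual_threats) * false_positive_rate

precision = true_flags / (true_flags + false_flags)
print(round(false_flags))   # 299997 innocents flagged
print(round(precision, 4))  # 0.0098: barely 1% of flags are real
```

Even with a wildly optimistic 0.1% false-positive rate, roughly 300,000 innocents get flagged for every few thousand real cases; that is the workload the analysts would have to make moral sense of.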
Human access to information is limited by the machine and a
chain of proof.
Generating algorithmically a chain of proof that could
sustain legal scrutiny, and the many layers of the law, is
actually a very complicated problem. A friend once said
she was able to do it; I still have a hard time believing she
did, but let’s say she did.
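One plausible ingredient of such a chain of proof, sketched with invented names: a hash-chained, append-only audit log, where tampering with any past entry breaks every later hash.

```python
# Sketch of an append-only "chain of proof": each access record is
# hashed together with the previous hash, so any later tampering
# breaks the chain. A toy tamper-evident audit log.
import hashlib
import json

def append(chain, entry):
    prev = chain[-1]["hash"] if chain else "0" * 64
    payload = json.dumps(entry, sort_keys=True)
    h = hashlib.sha256((prev + payload).encode()).hexdigest()
    chain.append({"entry": entry, "hash": h})

def verify(chain):
    prev = "0" * 64
    for link in chain:
        payload = json.dumps(link["entry"], sort_keys=True)
        if hashlib.sha256((prev + payload).encode()).hexdigest() != link["hash"]:
            return False
        prev = link["hash"]
    return True

log = []
append(log, {"analyst": "a42", "case": "C-17", "query": "graph neighbors"})
append(log, {"analyst": "a42", "case": "C-17", "query": "call records"})
print(verify(log))                # True
log[0]["entry"]["case"] = "C-99"  # tamper with history
print(verify(log))                # False
```

This only makes tampering detectable, not impossible; legal-grade proof would need far more than this toy.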
Then “anything, anywhere, about anybody” wouldn’t mean
the same when it refers to
a random Booz Allen employee, or to
storage, because that data is already stored anyway.
There is a big debate to be had about whether the best IT
security professionals are at Google or at the NSA.
Google stores information with fairly informed consent, so as
long as they respect due process and industry standards, they
can safely claim that their storing it is fine.
This post is to argue that a government could have a
similar claim.
Actually, that’s the case Google makes, if I understand correctly.
Of course, if the government is corrupt, there is a
problem, but the only solution that I can find to that is
oversight, and oversight has failed so massively in the US
lately that it’s hard to just assume it makes sense.
Let’s say the US signs The Hague convention. Once again:
this is a ridiculous attempt, in theory, and an informed
counter-point to the wave of hostility to Prism.
Truth is: in reality, at Google, access is permitted to
very few people, mainly to assess major legal issues (read: child
porn, and potentially terrorism, although I doubt a Google
specialist would be able to find suspicious what I presume
are encoded conversations).
Those few have to associate every query with a case file, and
that is seemingly very closely controlled.
There was one abuse: sad, but spotted, which at the time
was deemed proof that the system worked.
Inventing a case file is potentially easy, as are other
abuses; let’s assume that the officers monitoring for those are
as numerous, incorruptible and monitored as the first layer.
Basically, ask what the back office and the
Inspection Générale of the Société Générale did for
the Delta desk around Jérôme Kerviel, and do not do that.
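The case-file requirement could be sketched like this, with all names invented: queries without an open case are refused, and every attempt, allowed or not, lands in the audit trail that the monitoring layer reviews.

```python
# Sketch: every query must carry a case file, and each access is
# logged for the monitoring officers. All names are hypothetical.
class CaseGatedStore:
    def __init__(self, data, valid_cases):
        self._data = data
        self._valid = set(valid_cases)
        self.audit = []   # what the monitoring layer reviews

    def query(self, analyst, case_id, key):
        if case_id not in self._valid:
            self.audit.append((analyst, case_id, key, "DENIED"))
            raise PermissionError("query requires an open case file")
        self.audit.append((analyst, case_id, key, "OK"))
        return self._data.get(key)

store = CaseGatedStore({"bob": "records..."}, valid_cases={"C-17"})
print(store.query("a42", "C-17", "bob"))  # records...
try:
    store.query("a42", "C-99", "bob")     # no such case file
except PermissionError:
    print("denied")
```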
Filling in details as to why you need access is tedious (and I
understand analysts want to avoid it; open bar sounds
more appealing), but it is relevant and necessary, especially
when other means of information are illegal or frowned upon,
when there are suspicions of a mole, etc. All of these are
problems that would actually greatly benefit from proper
documentation: documented illegal spying from the secret
services would be used to justify more extensive laws, etc.
The real issue was not so much that so much information was
gathered, but that it was considered a good idea to boast
about it, and to let so many people share it.
Yes, I have seen such databases; the general attitude
around them is fairly cavalier.
In my case, the people in charge were fully aware of
the implications, so much so that it drove one of
them into very scary problems.
There is a lack of software to help figure things out
from such data without breaching privacy.
I wanted to do some calculations on the Facebook graph;
I never could, as no one at Facebook would take in the request.
The test is actually fairly simple at a basic level: would any
number printed out be different if one or two values were changed?
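That test is essentially the intuition behind differential privacy. A sketch of the standard Laplace mechanism for counts; the epsilon value is an arbitrary choice:

```python
# The test above is the intuition behind differential privacy:
# a released number shouldn't change noticeably when one person's
# value changes. Sketch of the Laplace mechanism for a count.
import random

def noisy_count(values, predicate, epsilon=0.5):
    """Release a count plus Laplace noise of scale 1/epsilon
    (the sensitivity of a counting query is 1)."""
    true_count = sum(1 for v in values if predicate(v))
    # Difference of two exponentials ~ Laplace(0, 1/epsilon).
    noise = random.expovariate(epsilon) - random.expovariate(epsilon)
    return true_count + noise

data = [12, 45, 3, 78, 45, 9]
over_40 = lambda v: v > 40
# The noisy release hides whether any single record was present.
print(round(noisy_count(data, over_40), 1))
```

With a small epsilon the noise is large enough that no printed number betrays any single value; with a huge epsilon the release converges on the true count.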
* [ ] Moral selection bias
One thing: intent really matters in perception.
I’ve seen several scientific papers published with
interesting social details, not the least amusing being
a map of Facebook relations; all of them, to a
neophyte in large databases, would seem to require access to
an extensive personal network. All have been acclaimed as
futuristic, beautiful, interesting visualizations.
Formally, most of those were made with the same meta-data
that many now claim the government shouldn’t have access to:
a graph of Belgium;
positions in Paris and Rome during a concert;
a map of NY by international calls.
The difference is: those data were indeed stored in extreme
detail, but processed and shown only in the aggregate.
The public imagines that there is something, either
soft (a habit, an institution, personal ethics,
research goals, lack of time or interest) or hard
(law, code, technical means), that prevents the
scientist from accessing such personal information.
Truth is: I’ve seen every possibility, generally
without any real checking on the scientists. Spying
on your significant other because you have an
extensive e-mail structure doesn’t really make
sense when he or she is the only person who
doesn’t yawn when you talk too much about how
triadic clustering at twice the complexity
factor is a good (algorithmic) bargain.
Plus, you know far too well that the very active
relations you unearth that way are too often
spammy joke chains, gossipy nonsense, and people
arguing to get out of those.
Really interesting relations, just like that
intelligence officer whom I met once, with whom I
talked for two hours at best, at a conference
where he was actually using another grad
student’s badge (so he most likely never had his
name on the same document as me), require
understanding context, and how that person could
change your perspective, and has.
I guess I’m not convinced about an algorithmic
solution just yet.
Arguments of the IC
“Publishing details would harm national security.”
Does anyone else have a problem with the word “national”
in that sentence? Seriously, that’s how you thank the
Coalition of the Willing?
No, this is a question of democracy, and I have seen
nothing so far that, as far as my imagination allows me to
think like a terrorist, would change my attitude.
The biggest problem that I read in the many reactions
is the idea that if some people can see anything, then
they will. No: your life might not be purely legal, but it’s
most likely not very interesting either.
That is included in the word surveillance.
Certainly true for the Stasi.
About the technology
There has been a lot of fantasy about what is possible: from
IBM’s first tabulating machines, which helped write down lists
of people sent to concentration camps, to very fancy
recommendation engines, there are worlds of sophistication, yet
all have been described in the same terms.
We need to be clear as to what this software can do, as
far as I can tell:
Associate network equivalents.
One person uses two cell phones, one official and
one burner; the theory is that if he calls
everybody but one person with the first, and the same list
plus one very bad guy with the second, he is trying to hide
something.
Seriously, using your burner to call your contact
book? That doesn’t make sense. What is actually used is
the network of telephone antennas at a given
time: if he carries both phones at all times, you can
associate the two. So do not carry your burner,
and live in crowded areas. Not sure that one was
hard, even if the algorithms are actually fairly
expensive to run live, and the CIA hates that
people who are not the Agency can do that and
burn their own agents, who carry their burners
with them and can have atypical movements.
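The antenna-based association could look roughly like this, on invented data: two handsets whose (antenna, hour) tracks overlap heavily are probably in the same pocket.

```python
# Sketch of the co-travel association: two handsets that keep showing
# up on the same antenna at the same time are probably carried together.
# Data and the similarity measure are illustrative.
from collections import defaultdict

def sightings_by_phone(events):
    """events: (phone_id, antenna_id, hour) tuples."""
    seen = defaultdict(set)
    for phone, antenna, hour in events:
        seen[phone].add((antenna, hour))
    return seen

def colocation(seen, a, b):
    """Jaccard overlap of the two phones' (antenna, hour) tracks."""
    inter = seen[a] & seen[b]
    union = seen[a] | seen[b]
    return len(inter) / len(union) if union else 0.0

events = [
    ("official", "ant1", 9),  ("burner", "ant1", 9),
    ("official", "ant2", 10), ("burner", "ant2", 10),
    ("official", "ant3", 11), ("burner", "ant3", 11),
    ("other", "ant7", 9),
]
seen = sightings_by_phone(events)
print(colocation(seen, "official", "burner"))  # 1.0
print(colocation(seen, "official", "other"))   # 0.0
```

This is also why "do not carry your burner, and live in crowded areas" works: no shared track, and dense antennas mean many coincidental co-sightings.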
Find out the real frontiers of a group of
friends; this helps sort ‘I know him from the
mosque’ from ‘We are part of the same cell’.
Seriously, I tune those: they are very
sensitive, and not very convincing in general. Most
cells have hierarchical belonging, so whoever is
on the frontier probably is so for a reason.
That one is very, very greedy, the only one that could
justify the extent of the server farms that have
been mentioned, along with deciphering. The others can be
run (and have been) on simple servers.
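A naive version of that frontier computation, on a toy graph: the members of a group who have at least one tie outside it.

```python
# Sketch of the "frontier" idea: within a social graph, find the
# members whose ties straddle the group boundary. Toy data only;
# real versions are far more sensitive to tuning, as noted above.
def frontier(graph, group):
    """Nodes in `group` with at least one edge leaving the group."""
    members = set(group)
    return {n for n in members
            if any(nb not in members for nb in graph.get(n, []))}

graph = {
    "a": ["b", "c"],
    "b": ["a", "c"],
    "c": ["a", "b", "x"],  # c also knows an outsider
    "x": ["c"],
}
cell = ["a", "b", "c"]
print(sorted(frontier(graph, cell)))  # ['c']
```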
It can isolate methodology and cell structure.
I do not believe it is efficient.
All of those can be easily disrupted with something
as sophisticated as compartmentalization, as
practiced in Sleeper Cell, the very good TV series.
Interestingly, that compartmentalization exists both
for internal reasons, mainly fighting embedded
agents, and because the principles used to track the
9/11 terrorists were proudly explained in the
NY Times after the fact. The intelligence community is
its own worst enemy.
If it was meant to detect terrorism, how would that work?
Associating commercial transactions with known threats:
bomb-making elements, large cash transactions,
or repeated small purchases.
Not sure how to train that one properly, other than
with ‘scenarios’, but it could help, especially if it is
handled the way credit card companies handle fraud
detection: “We would like to thank you for your recent
purchase of a boat. By the way: the security line is …”
Far more likely to be relevant for organized crime.
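Those heuristics could be sketched as follows; the watchlist, thresholds and field names are all invented:

```python
# Sketch of the transaction heuristics named above: matching purchases
# against a precursor watchlist, and flagging large cash transactions
# or repeated small purchases. Watchlist and thresholds are invented.
WATCHLIST = {"ammonium nitrate", "acetone", "peroxide"}

def flag_transactions(txs):
    """txs: list of dicts with 'item', 'amount', 'cash' keys."""
    flags = []
    small = [t for t in txs if t["amount"] < 50]
    if any(t["item"] in WATCHLIST for t in txs):
        flags.append("precursor purchase")
    if any(t["cash"] and t["amount"] > 10_000 for t in txs):
        flags.append("large cash transaction")
    if len(small) >= 5:                      # repeated small purchases
        flags.append("possible structuring")
    return flags

txs = [{"item": "acetone", "amount": 30, "cash": True}] * 6
print(flag_transactions(txs))
# ['precursor purchase', 'possible structuring']
```

As with credit card fraud detection, the output would be a polite follow-up, not an arrest warrant.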
Networks of scammers, notably the post boxes used for delivery
of goods bought with stolen credit cards (a problem of systemic
nature par excellence).
A camera doesn’t see: it records.
There is no human judgement behind most security cameras,
and that is frustrating, odd and new. As such, many people do
stupid things in front of those cameras: they anticipate
that there is someone watching, but that someone is
powerless, an inversion rite of sorts.
For someone to actually see the content of that record,
you need someone who argues that there is a
problem, convincingly enough to have that tape
considered as a trace, and a certain moment isolated.
It doesn’t mean privacy shouldn’t be respected; it means
that the standard for removing the protection an
individual expects from the consequences of an authority
watching, considering, and analyzing his records as such
needs to be set high.
One way to keep a high standard for watching, but
still have information, is robot-analysis.
Human looks are partial, interpretative, normed: they are
judgmental. Combined with agency over you, that leads to
always uncomfortable and sometimes bad things. That is
what the concept of privacy is trying to protect us
against. When that fails, the idea that one has the right
to defend oneself, and to offer an alternative explanation
of the traces, is a sign of how creative storytelling from
partial evidence can get.
Should the disclosures be extended to non-imminent threats?
Cf. road auto-ticketing devices.
Correction via feedback: “those are dangerous
clerics”; well, thank you for telling me, I wasn’t
sure about their actual policies.
I’ve been there: having algorithmic feedback is
very useful, even for live decisions like
following a cleric.