The Danger To Privacy Isn't Corporate Data-Mining Or Governmental Surveillance – It's Both Combined

Over the past year, something of a pointless debate has broken out: is it governments’ ridiculously broad surveillance laws that pose the greatest threat to our future liberties, or the gluttonous corporate collectors of data like Google and Facebook? It’s neither, because it’s both combined.

There’s a difference in culture on different sides of the Atlantic. In Europe, people tend to look to governments to protect them from abusive corporations; in the United States, people tend to look to corporations to protect them from an abusive government.

There’s certainly reason to seek protection from both these days. Let’s start with corporations.

It is said that a Visa executive – as in Visa, the credit card system – can predict your divorce a year before you can, based on your buying habits. There’s a recent telling anecdote where Target, the chain of stores, knew that a teenage woman was pregnant before her parents did. If our purchase habits give away our life and privacy to this degree, imagine what Google or Facebook would be able to predict if they wanted to.

Imagine you were diagnosed with a horrible and rare disease – like pancerebral aposcrupulosis, a normally rare condition with above-average occurrence in the political profession. Who would be the first to know about your rare and horrible disease after your doctor and yourself? Not your parents, not your children, not your spouse(s), not your close friends. Google would be the first to know, as you would immediately sit down to learn more about your diagnosis. (Unfortunately, with this particular degenerative disease, patients usually lack awareness of their condition.) Google’s ability to tap into what we’re thinking about is probably the closest thing we’ve yet come to actual mind-reading.

Facebook isn’t a threat so much in terms of what you’re thinking, but in terms of who you know. Your patterns can be predicted from their patterns.

However, neither Google nor Facebook has any particular interest in – nor indeed any ability to – knock down my door at dawn with a dozen agents in riot gear and automatic weapons just because they don’t like how I use their service. In fact, they have a strategic interest in making sure that never happens because of my relationship with them: if it happens to one person, it’s a signal that it can happen to any one of the billion people who use Facebook or the billion people who use Google (June 2011), and that would seriously harm the corporation in question.

So let’s instead jump to what governments can do. Enough countries now have blanket wiretapping laws in place that let them wiretap all their own citizens’ net traffic, all other citizens’ traffic, or both. (This would have been absolutely unthinkable just a decade ago.) Additionally, the security services generally share raw data between them – so just because you’re not tapped in your home country, that doesn’t mean your local security service doesn’t have a copy of everything you’ve ever typed or sent online; it can be tapped anywhere.

Governments are not only able to knock down your door when you behave in a way they don’t approve of – they like doing exactly that, and see it as their job. This is something of a problem, and quite a severe one.

The obvious next step to prevent governments from this outrageous intrusion – mind-reading followed by door-busting – is encryption. Encrypt everything, everywhere. Facebook seems to have gone encrypted (“https”) by default, as has Google (while I’m not sure how pervasive this default setting is, both of them only talk to me over an encrypted connection). When you encrypt, the pancerebral aposcrupulosis of governments and lawmakers becomes significantly less damaging – almost ineffective and irrelevant.

So I argue that the danger lies in a combination of the two powers: the real danger lies in governments claiming for themselves the right not to wiretap you directly, but to forcibly extract data about you from Google and Facebook (and the like).

When that happens, you have the closest thing we’ve come to mind-reading, combined with equally complete knowledge about what your friends, colleagues, and family think about, combined with the ability and desire to break down your door at dawn if you challenge the status quo too much.

To add insult to injury, governments have the ability to create this combination silently, denying Google, Facebook, and Twitter the ability to tell you that the data extraction has even taken place.

That’s what we should be worried about – not governments or corporations. It’s and.

Rick Falkvinge

Rick is the founder of the first Pirate Party and a low-altitude motorcycle pilot. He lives on Alexanderplatz in Berlin, Germany, roasts his own coffee, and as of right now (2019-2020) is taking a little break.


  1. Alex

    Well, yes – data isn’t dangerous, so from that point of view data mining isn’t dangerous.
    And governmental surveillance per se isn’t dangerous either, as long as they keep to their scope.
    However, I also don’t think they’re necessarily dangerous combined. It’s the intentions that make it dangerous, though you do say this in your article.
    When governments widen or even forget their scope about what to protect, deny, allow, and so forth, regulations flow out of it that can negatively affect everyone.
    Historically, governments existed to 1) protect their citizens from foreign nations, and eventually 2) make sure their citizens don’t hurt each other directly.
    What I see happening nowadays is that governments are slowly working towards 1) making sure you don’t hurt yourself, and 2) making sure “entities” don’t hurt each other – where entities can be anything from individuals to corporations. Additionally, a lot of the time they don’t really know whether they should do something, so they try to out of goodwill. These new tendencies are dangerous; they overrule what governments should keep their hands off.

    1. printersMate

      Data mining is always dangerous to some degree, because it answers the question asked without providing the context for a correct interpretation. If it results in the wrong adverts being presented to you, then the advertiser suffers; if it results in you going to jail as a terrorist when you are not one, then it hurts you. The definition of terrorism can be a bit flexible, such as:

      Marielle Gallo, who chaired one of the committees and supports the treaty, made some interesting comments about this:
      “We’re supposed to represent citizens, but since they are busy with other things, we are supposed to think for them!”
      “It’s not only a disinformation campaign. It’s a soft form of terrorism that frightens people. People are being scared. It’s a fantasy. ACTA has become a fantasy. And that, that’s propagated by the whole Internet network.”

      1. Alex

        Data mining isn’t dangerous; it’s always the intentions.
        Saying information is dangerous is itself dangerous – it’s the exact reason politicians are fine with sites like The Pirate Bay being blocked, even though they only host magnet links: a database of information. Just because they link to where files that potentially infringe copyright can be found doesn’t mean that they infringe themselves; they just present the users with the information, and it’s up to the users to do something with it.

        However, when you combine “humans” and “information” you’ll always get an interpretation, an opinion, and actions that flow from them – that’s certainly true. But is it the fault of the information that humans take these actions?

        1. Ano Nymous

          The wrong kind of information IS dangerous. I said it. Dangerous. Dangerous. Dangerous. Because if certain information and certain humans together is dangerous and you want to get rid of the danger, you either have to get rid of said information or said humans. And getting rid of humans is the very definition of dangerous.

          So the problem is of course only WHAT information to prevent the publishing of. Information that you want published – no. Information that you do not want published – yes. Except in a few very special cases.

          Information that is very easy to use for extracting information that the subject doesn’t want to be extracted (databases, facial recognition software, spyware) – should also to the fullest reasonable extent be prohibited.

          Look at it this way – everything is information. If every kind of information was open and free (currently ignoring technical and human limitations), I and everyone else in the world would know your name, social security number or equivalent, your race, religion, sexual orientation, and everything you maybe don’t want everyone to know – because that is also information.

    2. Andy

      “Historically, governments existed to 1) protect their citizens from foreign nations, and eventually 2) make sure their citizens don’t hurt each other directly.”
      That’s a good one. Keep ’em coming.

  2. Peter Andersson

    Super glue consists of two separate parts, each one harmless by itself – but when you open them, first there are the toxic vapors, and then when you combine them you risk getting everything they touch stuck really badly, really fast, in a way that’s impossible to undo.

    Use this metaphor as you see fit.


    1. Rick Falkvinge

      Also, if you take the caps off the two tubes of two-part superglue and use each cap to close the opposite tube, they’ll both be closed forever.

      1. Björn Persson

        We need to come up with a way to do that with corporate data mining and governmental surveillance. 🙂

  3. Andy

    First instance of ‘cerebral’ you wrote ‘celebral’, perhaps implying you wouldn’t want to invite a politician to your party. 😛

    1. Rick Falkvinge

      Oops, thanks. That spelling error kind of spoiled the subtle point. Fixed.

  4. printersMate

    Corporations are going to collect personal data, and governments – in particular the police and security services – are going to want access to it. However, the biggest threat to privacy is the use of the likes of Facebook and Twitter for private communications, along with the tracking these and other services carry out.

    Encryption of communication with such services does nothing to prevent access to the messages, as they are decrypted by the service. It can help to protect email communications, although who sent a message, to whom, and when can still be captured.

    The answer to privacy issues lies in greater use of the Internet’s base ability as a peer-to-peer system, by using federated services between privately owned servers. This allows control over who is allowed to connect to a server, by using public-key cryptography both to authenticate to a server and to sign messages.

    Cryptography can be used to protect both ‘remote’ access to a server, and also the interconnection between servers. Also multiple servers can be used for different levels of privacy, e.g. family or friends.

    A static IP is desirable for this type of connection, but a dynamic DNS service could be used. As the ISP can log where connections are coming from, this makes little difference to privacy. Note, however, that this is not intended to be a public connection; the existing services are used for public presence and communication.

    This approach can also be used for other private or semi-private servers for clubs and other associations. For such low-capacity servers, cheap machines such as Pogoplugs, Raspberry Pis, etc. can be used.

    Mutual agreements can be made to provide off-site backup, and shared document editing and other projects can be supported. In effect, this approach allows private networks to operate over the Internet.

    This would require government agencies to take legal actions to obtain private data. Also they would have to collect and catalogue the data to carry out any analysis, rather than letting corporations do this for them.

    This leads towards a system architecture like FidoNet or UUCP, which can run over internet connections or, if required, dial-up connections. If messages are batched into compressed files, even traffic analysis becomes more difficult.

  5. Ian Farquhar

    Unfortunately, the “encrypt everything” approach will fail as it relies upon SSL (HTTPS), which uses a core technology called PKI (Public Key Infrastructure). Basically, this technology “trusts up” to a series of “certificate authorities”, which your browser (i.e. you) trusts implicitly and unquestioningly. Consequently you’d think browsers would make a big deal out of asking for your consent to trust these many entities distributed all around the world, including in authoritarian states like China, but instead they all ship 50–100 default CAs buried in config menus and expect users to disable any they don’t actually trust. I’d say less than one in a thousand users even understands how it all hangs together.

    (Bear with me, we need to go a bit deeper.)

    When that padlock goes green or blue, indicating that you’re connected to a genuine site (rather than being connected to some government or corporate “man in the middle” traffic interception gateway), it’s because the certificate authority has given that genuine site a cert which verifies that.

    But what if a Certificate Authority – which is just a company or government agency – is pressured? Or if they’re hacked (example: DigiNotar was hacked to support SSL interception against Iranian users), or if formidable government cryptanalytic capabilities are used against a commercial agency to break the root signing keys on which the whole trust calculation hinges? Well then, the whole trust model breaks. Entirely. Irrevocably. And almost without any hint that it’s happening.

    AND BY DEFAULT, YOUR BROWSER – i.e. YOU – TRUSTS 50–100 different organizations, which include both companies and local and foreign government agencies all around the world.

    Does anyone see a problem here? SSL isn’t false security: the protocol is very cleverly designed. But the whole concept of PKI around which it revolves has a poor design, and it fails catastrophically.

    SSL is a house of cards. It is broken. Its fundamental trust model is ridiculous, and current browser implementations fail to even highlight high-risk transitions, such as sudden changes of CA (which could give a clue to interception). You can band-aid this to an extent with tools like Certificate Patrol (a Firefox extension), but it’s not a perfect solution.
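    The check a tool like Certificate Patrol performs can be sketched in a few lines: remember the fingerprint of the certificate a site presented last time, and raise an alarm when it changes. A minimal trust-on-first-use illustration in Python – the `check_pin` helper and the fake DER bytes are invented for illustration, and a real tool would also have to handle legitimate certificate renewals:

```python
import hashlib

def fingerprint(cert_der: bytes) -> str:
    """SHA-256 fingerprint of a certificate in DER form."""
    return hashlib.sha256(cert_der).hexdigest()

def check_pin(host: str, cert_der: bytes, pin_store: dict) -> bool:
    """Return True if the presented certificate matches the pinned one.
    On first contact, pin the certificate (trust on first use)."""
    fp = fingerprint(cert_der)
    if host not in pin_store:
        pin_store[host] = fp   # first visit: remember this cert
        return True
    return pin_store[host] == fp

store = {}
cert_a = b"--fake DER bytes of the genuine cert--"
cert_b = b"--fake DER bytes of an interceptor's cert--"
assert check_pin("example.org", cert_a, store)      # first visit: pinned
assert check_pin("example.org", cert_a, store)      # same cert: fine
assert not check_pin("example.org", cert_b, store)  # cert changed: alarm
```

    Note that this sidesteps the CA hierarchy entirely: it doesn’t matter who signed the new certificate, only that it differs from the one seen before.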

    What we need to do is to support the development of better privacy-enhancing protocols and tools. It is critical that these tools be practical and widely deployable, so that they become ubiquitous; if they don’t, the very use of them will mark the user as suspicious to the government agencies who consume this blanket invasion of our privacy.

    This isn’t a theoretical possibility. SSL interception in the corporate world is mainstream technology, with companies like Blue Coat, Cisco (IronPort) and Microsoft supporting it. Even the open source “Squid” proxy does it, with its SSL-Bump feature.

    At the higher-end, supporting government-level interception, are companies like Netronome and a lot of shadowy others which exist purely to support the defence and intelligence communities (almost always filled with ex-government spooks).

    What I would like to know is raised in part of your post above: what has changed in the last ten years? In the ’70s, the US National Security Agency’s misbehavior resulted in the Church Committee, which oversaw sweeping changes to the operation of the NSA. Nowadays, what that committee found would be unlikely even to be reported, despite the massive and systematic abilities of these agencies and private organizations to invade our privacy. The quick answer is “terrorism” and “9/11”, but I believe that’s trite and uninsightful. Something has happened to make people comfortable with the largest and most systematic loss of privacy and freedom the world has ever seen.

    But what?

    1. Björn Persson

      Postulating that everybody trusts a plethora of CAs, most of which most people have never even heard of, is obviously a flawed design. If only one of the CAs is compromised, then the whole system is compromised. If we have to postulate that everybody trusts some authority, then it’s better to have only one such authority, not dozens of them. DNSsec is an improvement over the X.509 infrastructure in that regard.

      1. printersMate

        If one of the multiple CAs is compromised, then their signing certificate is revoked, but other signing certificates remain valid.
        With a single CA, when they are compromised, the whole system is compromised.
        In the first case, damage is limited to clients of the compromised CA, and they can go to another CA to gain a new valid certificate.
        In the second case the whole system is broken, and no certificate can be trusted. Recovery is extremely difficult to impossible.

        1. Björn Persson

          Wrong. If you have compromised one CA, then you can issue a seemingly valid certificate for whichever domain name you want. Then you can redirect your victims’ requests to your own malicious website, certified by the compromised CA. You can impersonate any website in the world, regardless of which CA that site is actually a customer of. That means that the whole system is broken.
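          The weakest-link property described above follows directly from how chain validation works: a browser accepts a certificate if any root in its trust store vouches for it, so the site’s actual CA is never consulted. A toy model in Python – all names and the dict representation are invented for illustration; real validation checks cryptographic signatures, not issuer strings:

```python
# A browser's trust store: ALL of these are trusted equally.
TRUSTED_ROOTS = {"HonestCA", "OtherHonestCA", "CompromisedCA"}

def browser_accepts(cert: dict) -> bool:
    # Accepted if signed by ANY root in the trust store --
    # which CA the site actually uses plays no role.
    return cert["issuer"] in TRUSTED_ROOTS

genuine = {"subject": "bank.example", "issuer": "HonestCA"}
forged  = {"subject": "bank.example", "issuer": "CompromisedCA"}

assert browser_accepts(genuine)
assert browser_accepts(forged)  # one bad root suffices to impersonate anyone
```

          This is why the security of the whole store collapses to that of its least trustworthy member.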

    2. printersMate

      The problem with any cryptography system is dealing with the key exchange required to enable the use of encryption. The use of CAs is a reasonable approach to communication between strangers who have not met in the real world. It does rely on the browser vendor managing the root certificates properly, or the user doing this. Note that a problem exists in validating that a root certificate was actually created by the claimed authority. Still, it is a significant improvement on relying on a certificate offered by a site without any validation by a signing root certificate.

      This system is not intended to deal with privacy issues, but only communication security. The tools to enable privacy exist in PGP and other private/public key systems. The problem with these systems is that people have to meet to exchange keys, or trust published public keys. The latter comes back to trusted publishers of keys.

      To a large extent, people have given up privacy for the convenience of services like Facebook and Twitter. Solving this is an education problem, and also requires making home servers easy to install and use. This is not helped by ISPs only offering dynamic IP addresses for consumer connections. Also, most political parties have become too political and driven away most voters; they do not like listening to the voters, but rather to the rich and powerful.

      1. Anonymous

        Communication security is a fundamental enabling technology for privacy, although I agree that they’re not the same thing.

        All authentication systems eventually collapse to an axiomatic trust decision (likely a consequence of Gödel’s incompleteness theorem), and in the case of PKI that is trust in the CA. I’m going to be catching up with one of the developers of the original SSL standard, and I’ll ask him whether he ever anticipated an entity trusting 50+ international CAs during the development of the standard. I am fairly sure I’ll get a resounding “no”.

        Even PGP’s web of trust (replacing the hierarchical PKI model) fundamentally moves that axiomatic trust outside the system, replacing the single CA trust vector with a web of trust in which you use a risk-based approach to gauge assurance.

        I was not suggesting a return to pre-placed keys. I AM suggesting a fundamental redesign of the browsers and protocol to –

        (a) Highlight key changes in a way which will make end-sites more diligent around key management and lifecycle. For example, if they change CA more than a week before the expiry of the previous cert, the browser will display this as a very high-risk event, increasing the attack cost for someone who has compromised a CA.

        (b) Facilitated by the above, introduce risk-based CA management in browsers. For example, given the voluminous evidence about state-sanctioned industrial espionage from China, I see no rational reason for trusting a CA in China run by a state-owned telco. My trust in certificates signed by US-based organizations is higher, but not significantly so, and the same for an Australian CA. I would place a lot of trust in certain European CAs. This risk profile should be expressible through the interface, so that, for example, a site signed by a Chinese CA doesn’t actually list as secure.

        The US-based CAs will LOATHE this, but international CAs will love it. There’s another good reason to do it. 🙂

        (c) Enhance SSL to offer multiple certificates, facilitating legal risk mitigation by companies. If I get a cert from a Scandinavian CA, a French CA, a US CA and a Malaysian CA, then I am fairly certain I am secure. The cost of attacking all four would be absurdly high.

        (d) (Maybe) In a similar way to cert revocation, the browser vendors could set up a configurable cert beacon, recording which cert each site presented. If you get presented with a cert different from the one seen by 99.9% of the rest of the world, then you have a problem. This could simply be an enhancement to the current malicious-site-blocking functionality.

        (e) Follow Microsoft’s lead, and block anything below 1024 bits. First-world nation states have had the ability to factor 1024-bit RSA keys for at least ten years now. China’s new Tianhe-1 has 160TB of RAM, and you only need 128TB for a GNFS attack on 1024-bit RSA. I was surprised no one else noticed this.

        There, five possible approaches.
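        Proposal (a) is mechanical enough to sketch. Assuming the browser remembers the previous certificate’s issuer and expiry date, it could flag a new certificate as high risk when the issuer changes long before the old one would have expired – the function and thresholds below are illustrative, not a real browser API:

```python
from datetime import date, timedelta

def ca_change_risk(old_issuer, old_expiry, new_issuer, today):
    """Classify a certificate change, per proposal (a)."""
    if new_issuer == old_issuer:
        return "low"      # routine renewal with the same CA
    if old_expiry - today > timedelta(days=7):
        return "high"     # CA switched long before expiry: suspicious
    return "medium"       # CA switched near expiry: plausible migration

# Same CA renewing early: low risk.
assert ca_change_risk("CA-1", date(2013, 6, 1), "CA-1", date(2013, 1, 1)) == "low"
# Different CA five months before expiry: high risk.
assert ca_change_risk("CA-1", date(2013, 6, 1), "CA-2", date(2013, 1, 1)) == "high"
# Different CA four days before expiry: plausible, medium risk.
assert ca_change_risk("CA-1", date(2013, 6, 1), "CA-2", date(2013, 5, 28)) == "medium"
```

        A browser applying this rule raises the cost for an attacker holding one compromised CA, since their forged cert would almost always appear as an early, cross-CA switch.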

        BTW, I would actually argue the issue with home servers isn’t dynamic IP addressing, but asymmetrical bandwidth.

        I also must not have expressed my final problem statement well. I agree that Facebook has made people devalue their privacy, and that one could argue that government surveillance has transitively benefited. However, I don’t believe that’s the full story. There is a significantly higher acceptance of government surveillance – even illegal surveillance.

        On the other hand, I also need to remember that Australia’s lawful-access-to-communications facilities were actually completely unlawful when first created. A conspiracy between the Australian Federal Police, the NSW Police, the Victorian Police and Telecom Australia saw the spending of millions of dollars to put intercept capabilities into the Australian telecommunication system, despite having no legal framework to do so, and a legal prohibition AGAINST doing so. Approximately 500 people were directly involved in that.

        When discovered, a Royal Commission was convened, chaired by Justice Stewart. He concluded that the actions were illegal, but that the benefits had outweighed the illegality. He recommended no charges against those involved, and proposed a legal framework to authorize it (the Telecommunications Interception Act). So misappropriation of hundreds of millions of dollars and clearly illegal activity by several hundred serving police officers and Telecom Australia employees was ignored.

        Smells a bit, right?

        But it gets worse.

        This was Stewart J’s SECOND Royal Commission. The first Stewart Royal Commission was into the “Mr. Asia” drug syndicate. Those of you who saw the “Underbelly” television show will know that the story of this syndicate was extensively portrayed in one of the episodes. Even the wire intercepts were dramatized.

        Where did most of the evidence for that first commission come from? It came from the very same illegal wiretaps facilitated by this illegal “LEA” infrastructure!

        So yes, the Royal Commission was an exercise in “Yes Minister”-style politics. By choosing Stewart J, whose success in the former Royal Commission had done much for his reputation, they pretty much guaranteed an outcome which would be favorable to them. I personally argue Stewart himself was part of that criminal conspiracy, as he could not possibly have believed that the material he was being presented with was legally obtained.

        My conclusion here is that maybe this lethargy is the norm, and the 70’s and early 80’s were atypical.

        1. printersMate

          Privacy is controlling the spread of information, and depends on trusting the people to whom you give the information. Communications security can only protect data in transit. File encryption can protect data in storage so long as the key is not stored on the same system as the file.
          SSL is mainly used to protect logins and data in transit. The problem of many CAs is a problem of there being many companies; this system should not be used to avoid government surveillance. It is otherwise a reasonable system for protecting communications between customers and companies. Its primary weakness is that users do not wish to deal with the details of certificate management. Increasing user involvement in key management largely fails, as many users just click carry-on without much thought.
          PGP systems have several uses: allowing published public keys to protect communications to the owner of a published key, and signature checking against a published key. The trustworthiness is dependent on the trustworthiness of the key publisher, and whether they validate the publisher of a key.
          PGP can also be used between people who exchange the public parts of their key pairs. It is useful for person to person communication, but less so for person to group unless a group public key is created.
          It is also useful for automatic logins in protocols like SSH. It is the ideal way of connecting to home servers.
          Bandwidth is not much of a problem for many uses of home servers, as they should be used for low-bandwidth, high-privacy purposes. With a fixed IP there is no need to use a third party to enable connections by the few people allowed to use the server.
          The home server is not a full replacement for Facebook etc., but it can be used for private family communications via a network of such servers. Sometimes the dissemination of data will be slower, but speed is often not required. The latest batch of baby pictures can be transferred overnight, for example; they should probably not be put up on Flickr or the like.
          Privacy would best be served by keeping ‘public’ and private communications separate. Arrange to meet your mates down the pub on Facebook, but discuss the problems of an elderly relative on the private network.
          A major problem with politics is that politicians have become largely disconnected from the electorate. People have become used to politicians not listening to them and are therefore largely resigned to them passing poor laws. SOPA/PIPA/ACTA were defeated because enough people took action and threatened the re-election chances of politicians if they were passed. The ideas in them will keep coming back until either they are passed or the politicians are replaced.

    3. A swede

      Decentralized P2P crypto-networks of public key certificates?

      1. If X % of my “friends” say something, then I believe them.
      2. Small world phenomenon.
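      The “X % of my friends” rule sketches easily: in a decentralized web of trust, a public key is accepted when enough of the peers you already trust have signed it. A minimal Python illustration – the function name, threshold, and names are all invented for the example:

```python
def accept_key(vouchers, my_friends, threshold=0.5):
    """Accept a public key if at least `threshold` of my friends have signed it."""
    if not my_friends:
        return False          # no friends, no basis for trust
    voting = my_friends & vouchers    # friends who vouch for this key
    return len(voting) / len(my_friends) >= threshold

friends = {"alice", "bob", "carol", "dave"}
assert accept_key({"alice", "bob", "eve"}, friends)   # 2 of 4 friends: accepted
assert not accept_key({"eve"}, friends)               # no friends vouch: rejected
```

      The small-world phenomenon is what makes this workable in practice: short chains of acquaintance mean most keys can accumulate enough vouchers without any central authority.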

  6. steelneck

    Of course governments and their (un)security services will flock to the collected data like bees to honey, and if they cannot get what they want when they ask for it, they will use law to get it. It all revolves around the collected data, regardless of who the collector is. In Sweden the required laws are probably already in place through, for example, the FRA law, which says that every operator who controls traffic that runs over the Swedish border has an obligation to copy requested traffic to FRA collaboration points. But there are other laws already in place too. Then there can of course be some agreements behind closed doors as well, especially regarding big foreign companies. (FRA is a communications intelligence service.)

    Just think a bit about how much of a half-secret revolving door there can be at the big Facebook server site that is under construction in Luleå in the north of Sweden. The Swedish government was engaged in the negotiations with Facebook and the US regarding this site. I think most Russians on Facebook can expect to have their traffic copied in bulk to the FRA, with results given straight away to the US – probably illegal to do within the US at this scale. The location in Luleå is also very strategic, since the Barents area is becoming hotter and hotter in a geopolitical sense.

    So, what can we do if we do not like it? One simple thing that every author can do in a matter of hours, to at least do their little part, is to stop _enabling_ the corporate gathering of info about their visitors through scripts and various objects hosted at corporations and linked in on their homepages. Just stop doing it; delete the code. Google or whoever shall not get notified of my IP visiting this site without my consent – or any other site, for that matter.

    For the Pirate Party this behaviour also risks hitting way too close to home at election time, if some MSM decide to blow up the double standard it is to first complain about surveillance and then actively help the corporations gather the information in question. Almost every Swedish blog that belongs to a pirate has various snippets of code that report a lot of info about their visitors to third parties; the exceptions are extremely few.

    Put together a group that can educate pirates about this and show how it works – Pawal’s Creeper, which tracks government surfing habits, is a very good example to use. Show just how easy it is for corporations to get droves of authors to enable mass surveillance just by offering them some bits of ready-processed data regarding their own site, and how extensive it gets through the sheer number of people helping them, for free. This is actually a _very_ important task.

    1. Ano Nymous

      Actually, the MSM is going to shut up completely about the Pirate Party, just like they always do.

      We got one single-issue party (I don’t know what that kind of party is really called in English, but it’s a party that only or mostly has one thing on its agenda) into parliament at the last election – Sverigedemokraterna (the Sweden Democrats), a party that opposes the generosity of Sweden’s immigration policy and is often called racist for that. Why did they get in? Because of MSM, of course. Almost every day for a few months before the election there were hate-filled articles against the Sweden Democrats on radio, on TV and in newspapers. But there was not a single word about the Pirate Party. I could bet one year’s income that if all those hateful articles had been about the Pirate Party, they would be where the Sweden Democrats are today – or higher.

      There is no such thing as good and bad publicity, only publicity and no publicity.

  7. William Lee

    I can’t let an article on this topic pass without mentioning something I’m always amazed slipped so fast into the public’s memory hole: the Google–CIA partnership.

    They signed a secret agreement around the same time the CIA ‘abandoned’ several of its TIA (Total Information Awareness) panopticon projects, created to collect as much information as possible about everyone. The CIA is of course notoriously incapable of dealing with large amounts of information. Google, on the other hand, is renowned for its abilities in this field, and people willingly – even eagerly – give them all sorts of juicy details.

    Of course both Google and the CIA assert their ‘right’ to privacy, while both deny that Google is sharing people’s information with the CIA. Information on this is hard to come by, and even the web, which sometimes seems never to forget anything, appears to have lost some of the details. Try searching for this on Google, and see how far you get! 😉


  8. Björn Persson

    Once again decentralization seems to be the solution. If people would use decentralized communication protocols and decentralized search engines instead of centralized services, then there wouldn’t be a central place where the governments could forcibly extract data about everybody. (And of course the protocols shall be encrypted.)

  9. Ano Nymous

    Well, this is approximately what I’ve been trying to tell people for the last three years, but I think you’re wrong about who we trust on the different sides of the “pond”. In Sweden, I have only read and heard people complaining about government surveillance, and I’ve been trying to tell everyone that Facebook, Google (incl. Android), and a few more are worse, although government surveillance is of course also very bad. What I have been thinking and saying since I read about CISPA is that when combined they are way, way worse than either on its own.

  10. Ano Nymous

    Also, the Clean IT Project: a law that is being secretly negotiated that is supposed to force companies into monitoring and censoring different types of online content posted by users. MSM doesn’t say a word.
    There is another Flashback thread on it in Swedish. (Sorry, all non-Swedish-speaking people, but Flashback is a good assembly point for information that MSM doesn’t cover.)

  11. Corruption

    What about companies and governments sharing your data, including location and login information? In some country, somewhere, your info will be for sale… If I remember correctly, telephone companies share location data with each other to facilitate mobile phone calls. By using corrupt Eastern European phone companies, some middlemen were selling people’s locations. If you gave them a phone number in, for example, Sweden, the Eastern European phone company could extract your current location…

  12. Esduardo Partarroyo

    Why do you publicize this so late? Why in 2015, when nothing can be done with this information? Even I realized this years ago, but who would believe me? Is it a coincidence this news was published in the UK too late? They are doing with “society” what they want. Believe in Jesus, because “Not one stone here will be left on another”.

Comments are closed.