Facebook says it is developing what appears to be a fairly fine-grained way of monitoring sounds occurring in enclosed spaces – spaces that would also be mapped for the objects they contain. Facebook is relying on artificial intelligence (AI) of one kind or another to achieve this, according to a blog post announcing “AI Habitat.”
As is often the case with this type of “arrangement” between ordinary users and a data-dependent, data-hungry tech juggernaut like Facebook, the trade-off involved doesn’t seem to be worth it for users. They would have to wear “smart glasses” to allow this invasive system into their homes, and in exchange get trivial help from their AI assistants in checking whether the front door’s locked and finding a phone that’s ringing somewhere else in the apartment. If that’s all the benefit to the user wearing a pair of awkward “glasses” loaded with sensors probing the privacy of their home to the core – it truly sounds like the worst deal ever.
A must-watch. David Icke has researched world events and plans (particularly of the 1%) for thirty years. As he points out, what is now materializing in plain sight is a plan that is not hidden at all. They have been discussing it for a long time. For the awake, it is about AI, artificial intelligence. They literally want us connected to their tech. It is really about complete control of all you do, in particular your thoughts. Having controlled perception thus far via control of the media, they now wish to enter your heads literally. The masks are helping you to lose your identity. This is now right upon us. EWR
Welcome everyone, thanks for tuning in, and congratulations! If you are reading or watching this, you have officially survived the first half of 2020. Something tells me the second half will be just as crazy as the first, if not more so. In this report we will take a glimpse of what the not-too-distant future may look like. Yes, some of this will be speculation, but it is speculation projected forward from the facts we have today. To be clear, the road humanity is being led down does not look very human at all. According to the social engineers, AKA technocrats, who are deciding and dictating what the future of humanity looks like, we have no say. Watch this report and decide for yourself: will humanity benefit from this projected future, or will this digitalized system of control be the final nail in the coffin of free will and the expression of individuality?
Sources:
In Michigan, House Passes Bill to “Voluntarily” Begin Placing Human Implantable Microchips into the Bodies of All State Government Employees – https://www.nowtheendbegins.com/michi…
World Economic Forum’s 4th Industrial Revolution – https://www.youtube.com/watch?v=kpW9J…
Mass-Tracking COVI-PASS Immunity Passports To Be Rolled Out in 15 Countries – https://www.zerohedge.com/political/m…
Digital Tattoo – https://whatis.techtarget.com/definit…
An Invisible Quantum Dot ‘Tattoo’ Could Be Used to ID Vaccinated Kids – https://www.sciencealert.com/an-invis…
Human Mind Control of Rat Cyborg’s Continuous Locomotion with Wireless Brain-to-Brain Interface – https://www.nature.com/articles/s4159…
Michigan Makes Worker Microchips Voluntary … Wait, What? – https://www.popularmechanics.com/tech…
Bill Gates Calls for a “Digital Certificate” to Identify Who Received COVID-19 Vaccine – https://www.newsbreak.com/news/0OdBn0…
CIA Mind Control – https://www.cia.gov/library/readingro…
Forgot to include: RFID Vaccine Needles – https://www.dcvmn.org/IMG/pdf/2019_ro…
As AI finds its legs, and with the new ‘social distancing’, the folks we interact with become less and less personal. Ella, the new Electronic Life-Like Assistant, is our latest AI possibility for obtaining information from the Police; ‘Police Connect’ it’s called. We’ve come a long way, haven’t we, from the constable on a bicycle who actually cared and had powers of human reasoning. I doubt this Ella will. If we ‘advance’ to China’s social credit system (i.e. less freedom if you cross the acceptable lines … remember, it’s been observed that China is the prototype for the new Orwellian totalitarian world government), Ella is all we’ll get to talk to regarding the rules and regs, I’d say. EWR
Read more about Ella at the link.
Police conducted a trial of controversial facial recognition software without consulting their own bosses or the Privacy Commissioner.
The American firm Clearview AI’s system, which is used by hundreds of police departments in the United States and several other countries, is effectively a search engine for faces – billing itself as a crime-fighting tool to identify perpetrators and victims.
New Zealand Police first contacted the firm in January, and later set up a trial of the software, according to documents RNZ obtained under the Official Information Act. However, the high-tech crime unit handling the technology appears not to have sought the necessary clearance before using it.
Privacy Commissioner John Edwards, who was not aware police had trialled Clearview AI when RNZ contacted him, said he would expect to be briefed on it before a trial was underway. He said Police Commissioner Andrew Coster told him he was also unaware of the trial.
Photo: Radio NZ
New Zealand is a leader in government use of artificial intelligence (AI). It is part of a global network of countries that use predictive algorithms in government decision making, for anything from the optimal scheduling of public hospital beds to whether an offender should be released from prison, based on their likelihood of reoffending, or the efficient processing of simple insurance claims.
But the official use of AI algorithms in government has been in the spotlight in recent years. On the plus side, AI can enhance the accuracy, efficiency and fairness of day-to-day decision making. But concerns have also been expressed regarding transparency, meaningful human control, data protection and bias.
In a report released today, we recommend New Zealand establish a new independent regulator to monitor and address the risks associated with these digital technologies.
AI and transparency
There are three important issues regarding transparency.
One relates to the inspectability of algorithms. Some aspects of New Zealand government practice are reassuring. Unlike some countries that use commercial AI products, New Zealand has tended to build government AI tools in-house. This means that we know how the tools work.
But intelligibility is another issue. Knowing how an AI system works doesn’t guarantee the decisions it reaches will be understood by the people affected. The best performing AI systems are often extremely complex.
To make explanations intelligible, additional technology is required. A decision-making system can be supplemented with an “explanation system”. These are additional algorithms “bolted on” to the main algorithm we seek to understand. Their job is to construct simpler models of how the underlying algorithms work – simple enough to be understandable to people. We believe explanation systems will be increasingly important as AI technology advances.
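As a toy illustration of this “bolted on” idea (not any system actually used by the New Zealand government), the sketch below pairs a hypothetical opaque decision rule with a crude explanation system: it searches for the single-feature threshold rule that best mimics the opaque model’s decisions, and reports how faithfully the simpler story matches. All names, features, and numbers are invented for illustration.

```python
import random

# Hypothetical opaque decision system: a complex, hard-to-read rule.
# It stands in for a trained model whose internals are not intelligible.
def opaque_model(income, debt):
    score = 0.3 * income - 0.7 * debt + 0.1 * (income * debt > 500)
    return score > 10  # True = approve, False = decline

# Explanation system "bolted on": search for the simplest single-feature
# threshold rule that best reproduces the opaque model's decisions.
def fit_surrogate(samples):
    decisions = [opaque_model(i, d) for i, d in samples]
    best = None
    for feature, name in [(0, "income"), (1, "debt")]:
        for threshold in range(0, 101, 5):
            for above in (True, False):
                preds = [(s[feature] > threshold) == above for s in samples]
                fidelity = sum(p == y for p, y in zip(preds, decisions)) / len(samples)
                if best is None or fidelity > best[0]:
                    best = (fidelity, name, threshold, above)
    return best

random.seed(0)
samples = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(500)]
fidelity, name, threshold, above = fit_surrogate(samples)
direction = ">" if above else "<="
print(f"Surrogate rule: approve if {name} {direction} {threshold} "
      f"(matches the model on {fidelity:.0%} of sampled cases)")
```

The surrogate’s rule is something a person can actually read, and the fidelity figure makes explicit how much the simplified explanation leaves out.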
A final type of transparency relates to public access to information about the AI systems used in government. The public should know what AI systems their government uses as well as how well they perform. Systems should be regularly evaluated and summary results made available to the public in a systematic format.
New Zealand’s law and transparency
Our report takes a detailed look at how well New Zealand law currently handles these transparency issues.
New Zealand doesn’t have laws specifically tailored towards algorithms, but some are relevant in this context. For instance, New Zealand’s Official Information Act (OIA) provides a right to reasons for decisions by official agencies, and this is likely to apply to algorithmic decisions just as much as human ones. This is in notable contrast to Australia, which doesn’t impose a general duty on public officials to provide reasons for their decisions.
But even the OIA would come up short where decisions are made or supported by opaque decision systems. That is why we recommend that predictive algorithms used by government, whether developed commercially or in-house, must feature in a public register, must be publicly inspectable, and (if necessary) must be supplemented with explanation systems.
Human control and data protection
Another issue relates to human control. Some of the concerns around algorithmic decision-making are best addressed by making sure there is a “human in the loop,” with a human having final sign off on any important decision. However, we don’t think this is likely to be an adequate solution in the most important cases.
A persistent theme of research in industrial psychology is that humans become overly trusting and uncritical of automated systems, especially when those systems are reliable most of the time. Just adding a human “in the loop” will not always produce better outcomes. Indeed in certain contexts, human collaboration will offer false reassurance, rendering AI-assisted decisions less accurate.
With respect to data protection, we flag the problem of “inferred data”. This is data inferred about people rather than supplied by them directly (just as when Amazon infers that you might like a certain book on the basis of books it knows you have purchased). Among other recommendations, our report calls for New Zealand to consider the legal status of inferred data, and whether it should be treated the same way as primary data.
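A minimal sketch of how inferred data arises, using invented purchase histories: no user ever states an interest in “foundation”, but the system derives it for one of them from co-purchase patterns, in the spirit of the Amazon example above.

```python
from collections import Counter
from itertools import combinations

# Hypothetical purchase histories: data the users supplied directly.
histories = {
    "ana":  {"dune", "foundation", "neuromancer"},
    "ben":  {"dune", "foundation", "hyperion"},
    "cara": {"dune", "neuromancer"},
}

# Count how often each pair of books is bought together.
co_bought = Counter()
for books in histories.values():
    for pair in combinations(sorted(books), 2):
        co_bought[pair] += 1

# Inferred data: "cara" never said she likes "foundation", but the
# system derives that interest from other users' co-purchases.
def infer(user):
    owned = histories[user]
    scores = Counter()
    for (a, b), n in co_bought.items():
        if a in owned and b not in owned:
            scores[b] += n
        if b in owned and a not in owned:
            scores[a] += n
    return scores.most_common()

print(infer("cara"))  # → [('foundation', 3), ('hyperion', 1)]
```

The inferred record (“cara is likely interested in foundation”) exists nowhere in the data she provided, which is exactly why its legal status is an open question.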
Bias and discrimination
A final area of concern is bias. Computer systems might look unbiased, but if they are relying on “dirty data” from previous decisions, they could have the effect of “baking in” discriminatory assumptions and practices. New Zealand’s anti-discrimination laws are likely to apply to algorithmic decisions, but making sure discrimination doesn’t creep back in will require ongoing monitoring.
The report also notes that while “individual rights” — for example, against discrimination — are important, we can’t entirely rely on them to guard against all of these risks. For one thing, affected people will often be those with the least economic or political power. So while they may have the “right” not to be discriminated against, it will be cold comfort to them if they have no way of enforcing it.
There is also the danger that they won’t be able to see the whole picture, to know whether an algorithm’s decisions are affecting different sections of the community differently. To enable a broader discussion about bias, public evaluation of AI tools should arguably include results for specific sub-populations, as well as for the whole population.
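The disaggregated evaluation suggested above can be sketched in a few lines. The records here are invented (subgroup, model decision, correct decision) triples, chosen so that a healthy-looking aggregate hides a gap between subgroups.

```python
from collections import defaultdict

# Hypothetical evaluation records: (subgroup, model decision, correct decision).
records = [
    ("group_a", True, True), ("group_a", False, False), ("group_a", True, True),
    ("group_a", True, False),
    ("group_b", False, True), ("group_b", False, True), ("group_b", True, True),
    ("group_b", False, False),
]

def evaluate(records):
    # Accuracy for the whole population and for each subgroup.
    totals, hits = defaultdict(int), defaultdict(int)
    for group, predicted, actual in records:
        for key in ("overall", group):
            totals[key] += 1
            hits[key] += predicted == actual
    return {key: hits[key] / totals[key] for key in totals}

for key, acc in sorted(evaluate(records).items()):
    print(f"{key}: {acc:.0%}")
# group_a scores 75% and group_b only 50%; the overall figure of
# 62.5% alone would mask that gap entirely.
```

Publishing results in this per-subgroup form is what lets affected communities, not just agencies, see whether an algorithm treats them differently.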
A new independent body will be essential if New Zealand wants to harness the benefits of algorithmic tools while avoiding or minimising their risks to the public.
Alistair Knott, James Maclaurin and Joy Liddicoat, collaborators on the AI and Law in New Zealand project, have contributed to the writing of this piece.
In the video they say AI is a risk, and yet they are not putting any stops on it. Pandora’s box. An informative video.
Published on Nov 19, 2018
Surveillance, the chip, bots, DARPA, Lockheed Martin, cyborg soldiers, artificial intelligence: the future. An interesting watch.
Published on Oct 30, 2017
Has Alexa, Amazon’s AI home assistant, inadvertently blown the whistle on state authorities?
Chemtrails “left by aircraft are actually chemical or biological agents deliberately sprayed at high altitudes for a purpose undisclosed to the general public by government officials.” – Alexa AI
Amazon, having acknowledged this as a program error, hurried to make corrections to Alexa’s programming. Asked the same question about chemtrails now, the digital assistant replies:
“Chemtrails refer to trails of condensation, or contrails, left by jet engine exhaust when they come into contact with cold air at high altitudes.”
Here is the original (a different video from the one originally posted here; that account is gone now, but FYI I will leave their notes below):
Published on Apr 7, 2018