On August 14, 2018, Brazil's General Data Protection Law (LGPD) was signed, and it will enter into force in February 2020, after an 18-month adaptation period. The new law introduces very significant changes that should radically transform how individuals, companies, and public bodies approach privacy. This is not merely a technological matter, but a challenge that will involve society as a whole.
A basic principle concerns the accountability of the data controller (the party responsible for decisions about the processing of personal data), who must be able to demonstrate, effectively, that processing is carried out in accordance with the LGPD. The law adopts the perspective that privacy must be respected from the very design of services and products, a "cultural" premise for the path to compliance. The principles are:
PURPOSE: processing must be carried out for legitimate, specific purposes, with no possibility of further processing in a manner incompatible with those purposes;
ADEQUACY: processing must be compatible with the purposes communicated to the data subject;
NECESSITY: processing must be limited to the minimum necessary to achieve its purposes;
FREE ACCESS: data subjects must be guaranteed easy, free-of-charge consultation about the form and duration of processing, as well as access to all of their data;
DATA QUALITY: the accuracy, clarity, relevance, and currency of the data must be guaranteed;
TRANSPARENCY: data subjects must be guaranteed clear and easily accessible information;
SECURITY: technical and administrative measures must be adopted to protect data from unauthorized access;
PREVENTION: measures must be adopted to prevent harm arising from the processing of personal data;
NON-DISCRIMINATION: data may not be processed for discriminatory purposes;
ACCOUNTABILITY: demonstration of effective measures to observe, and to prove compliance with, personal data protection rules.
The largest collection of breached data in history has been discovered, comprising more than 770m email addresses and passwords posted to a popular hacking forum in mid-December.
The 87GB data dump was discovered by the security researcher Troy Hunt, who runs the HaveIBeenPwned breach-notification service. Hunt, who called the upload Collection #1, said it was probably “made up of many different individual data breaches from literally thousands of different sources”, rather than representing a single hack of a very large service.
But the work to piece together previous breaches has resulted in a huge collection. “In total, there are 1,160,253,228 unique combinations of email addresses and passwords,” Hunt wrote, and “21,222,975 unique passwords”.
While most of the email addresses have appeared in previous breaches shared among hackers, such as the 360m MySpace accounts hacked in 2008 or the 164m LinkedIn accounts hacked in 2016, the researcher said “there’s somewhere in the order of 140m email addresses in this breach that HIBP has never seen before”. Those email addresses could come from one large unreported data breach, many smaller ones, or a combination of both.
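Hunt's service also exposes a Pwned Passwords range API that lets anyone check a password against this corpus without ever transmitting it: only the first five characters of the password's SHA-1 hash are sent, and the matching hash suffixes come back for comparison on the caller's machine (a k-anonymity scheme). A minimal sketch in Python, assuming the publicly documented `api.pwnedpasswords.com/range/` endpoint; the helper names are illustrative, not part of the API:

```python
import hashlib
import urllib.request


def hash_parts(password: str):
    """Uppercase SHA-1 hex digest, split into the 5-char prefix that is
    sent to the API and the 35-char suffix that never leaves the machine."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]


def count_in_range(range_body: str, suffix: str) -> int:
    """Scan the API response (lines of 'HASHSUFFIX:COUNT') for our suffix;
    return how many times the password appeared in known breaches."""
    for line in range_body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate.strip() == suffix:
            return int(count)
    return 0


def pwned_count(password: str) -> int:
    """Query the range API; the password itself is never transmitted."""
    prefix, suffix = hash_parts(password)
    url = "https://api.pwnedpasswords.com/range/" + prefix
    with urllib.request.urlopen(url) as resp:
        return count_in_range(resp.read().decode("utf-8"), suffix)
```

Because only a 5-character hash prefix crosses the network, the service learns nothing usable about the password being checked, which is why breach-notification tools can offer this safely.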
Security experts said the discovery of Collection #1 underscored the need for consumers to use password managers, such as 1Password or LastPass, to store a random, unique password for every service they use. “It is quite a feat not to have had an email address or other personal information breached over the past decade,” said Jake Moore, a cybersecurity expert at ESET UK.
“If you’re one of those people who think it won’t happen to you, then it probably already has. Password-managing applications are now widely accepted and they are much easier to integrate into other platforms than before.”
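The advice works because a manager only asks the user to remember one master secret; every per-site password can be drawn from a cryptographically secure random source. A small illustration of that underlying idea in Python, using the standard-library `secrets` module (the 20-character length and the alphabet are arbitrary choices for the sketch, not a recommendation from the article):

```python
import secrets
import string

# Character pool: letters, digits, and punctuation.
ALPHABET = string.ascii_letters + string.digits + string.punctuation


def random_password(length: int = 20) -> str:
    """Draw each character from the OS CSPRNG, as a password manager would."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))
```

Using `secrets` rather than `random` matters here: `random` is a deterministic Mersenne Twister generator and is unsuitable for credentials.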
Source: The Guardian
AI could detect security threats no human could see with the naked eye!
Artificial intelligence “can be of immense importance in detecting things that are almost impossible to detect by manual work,” cybersecurity consultant Amit Meltzer said last week at a conference on AI in Tel Aviv.
The event, “AI for Human Language,” was organized by Basis Technology, a software company that provides AI solutions for the understanding of multilingual and unstructured texts, and hosted 100 to 150 attendees.
The conference focused on how technological innovations brought by AI have transformed natural language understanding (NLU), a branch of artificial intelligence dealing with machine reading comprehension and understanding data when it is in the form of text or speech.
One panel, moderated by CEO Amit Bohensky of Israeli startup Zoomd, dealt with how AI techniques of natural language processing are and will increasingly be of importance to government intelligence agencies.
Meltzer, who worked in the past as CTO in the Israeli Prime Minister’s Office, explained that AI text analysis “improves our ability to monitor” and identify criminal activity. He added that AI’s use of mathematical supervised methods can help trace “indications of hidden activity patterns” that human intelligence analysts would not be able to extract from a huge volume of data. “AI for the foreseeable future is not substituting for people, but helping them,” he said.
Of all the threats artificial intelligence poses to humanity, we unsurprisingly focus on the most dramatic. We fear that AI and automation will steal jobs, will make authentic human relationships redundant, and will even enslave or destroy the entire human race. However, one danger is already here, and while it may not be quite as dystopian as those above, it’s no less disconcerting: the arrival of malicious AI chatbots. Thanks to advances in machine learning, chatbots can now be exploited by hackers in order to deceive unsuspecting victims into clicking dubious links and handing over sensitive information.
For the most part, malicious chatbots will look and act just like the regular chatbots you find on websites: They’ll appear in a small pop-up window in the corner of a webpage, will ask visitors and customers if they need any help, and will use machine learning and natural language processing to respond intelligently to comments. But rather than provide genuinely useful information or assistance, they’ll ask people for their personal info and data, which will ultimately be used by hackers for less-than-honorable ends.
Such “bad chatbots” have received scant attention up until now, but that’s likely to change in 2019. Publishing its cybersecurity predictions for the coming year, the Seattle-based security firm WatchGuard has put malicious chatbots first on its list of security threats (above ransomware, “fileless” malware, and a possible “fire sale” attack). At a time when chatbots are on track to be used by 80 percent of companies by 2020, such a threat is very real, if only because the usually innocuous chatbot is becoming disarmingly familiar. “In many ways, AI-based chatbots are the next frontier in computational development,” explains Yaniv Altshuler, a researcher at MIT’s Media Lab and the CEO/co-founder of Endor, an Israel-based AI company that’s building an automated “prediction engine” for businesses. “Just like the point-and-click interface changed the way people interacted with their technology, AI-based chatbots are making our computing experience more personal and lifelike.”