
August 14, 2018

6 textbook examples of how NOT to respond to a Data Breach (Seriously guys?)

Yahoo: Do nothing and pray it goes away

Why are we surprised at this?! When Yahoo suffered a breach in 2013, it decided to just keep quiet about the 3 billion accounts that were compromised. Surely this would prove to be an effective strategy?

LOL.

The full story broke FOUR years later, in 2017: all 3 billion accounts had been hacked, far more than the company admitted in 2016, when the public first heard anything about a breach at all. We shouldn’t really be surprised, as “do nothing and pray it goes away” has been Yahoo’s MO for quite some time now.

FriendFinder Networks: Take days to respond and then downplay the incident in a vague press release

FriendFinder Networks is a company you’d reeeally want keeping your data secure. It operates AdultFriendFinder, a “sex and swinger community,” and when it suffered a breach in 2016, the response was slow and the press release was tepid. The company affirmed that it “encourages users to change their passwords,” and appeared to put most of the onus on the users, commenting that it would contact users “to provide them with information and guidance on how they can protect themselves.” Seriously?

This press release came after days of speculation, which is actually forever if you are a user of an adult website waiting to find out if your data has been made public.

Equifax: Fail to patch software, take forever to disclose breach, let execs sell their shares

Equifax has one of the shadiest timelines of this group, and competition was stiff! After the company failed to patch a known vulnerability in the widely used open-source software Apache Struts in March 2017, the data of 143 million US customers was potentially exposed starting in May 2017. Then, in early August, days after the breach was discovered on July 29th, executives sold off nearly $1.8M worth of Equifax shares. Hmm….this looks bad, but maybe there’s something we don’t know here. (Read: there’s not. It’s bad.)

Ticketmaster: Pretend it’s not happening

Ticketmaster was alerted to a possible breach in April of 2018, but decided to do its best impression of an ostrich and just pretend it wasn’t happening until it received apparently irrefutable (or un-buryable) evidence on June 23rd. Online bank Monzo released a statement shortly afterward saying it spotted the breach in April, but Ticketmaster said nah after an internal investigation revealed no evidence of any such breach.

I’m confused. Are we just letting companies investigate themselves now? This is not how any of this should work. Anywho….

Facebook: Deny deny deny

Facebook didn’t suffer a breach. Instead, it voluntarily gave away a treasure trove of user data and then informed us that we had all agreed to it in the terms and conditions. Whoops – we should have read those, but they’re just so boring, and no one can recall seeing a line item that said “we will give away all your data, suckers, and there’s nothing you can do about it LOL.” I think I would have remembered that…..

To its credit, Facebook did admit that user data had been “improperly shared,” but didn’t go so far as to call it a breach. It didn’t go so far as to call us suckers either, but that doesn’t mean it isn’t true.

Exactis: Leave us all in suspense as if our data’s safety was a plot point in a Mission Impossible movie

None of this is entertaining, you guys. Apparently there is a “database with pretty much every US citizen in it” floating around the internet, according to security experts. That seems pretty bad.

But even worse, the company associated with the breach has stayed silent for days, which is deeply bumming out 230 million of us who would kindly like to know if our personal information is available online.

The bottom line

Data breaches are inevitable. Attackers are targeting companies on a daily basis. But ignoring the fact that a data breach has occurred, failing to patch a known vulnerability, putting the onus of dealing with a breach on users, and – most obviously of all – selling off your stock when you have insider information of a breach doesn’t help anyone. Companies need to be honest when they think a breach has occurred, or they risk losing their customers’ trust. And as our data multiplies exponentially, trust is becoming scarce.

Filed Under: Cybersecurity Tagged With: Breach, cybersecurity, data breach, equifax, facebook, online privacy, Privacy, Security, ticketmaster

July 26, 2018

Why Your Camera Isn’t As Safe As You Think

You’ve joked about it before. How some lonely CIA agent is secretly watching (and perhaps salivating at) your every move via your webcam. So, after you get out of the shower you open your laptop, strike a pose, and chuckle to yourself because you know the very idea is both hilarious and preposterous…until you realize it isn’t.

Webcam spying is very real.

Sure, you’ve seen articles and news segments about people who’ve fallen victim to spying via their webcams. But that’s because they’re either incredibly careless or doing some illegal stuff they know they shouldn’t. Right? Not exactly.

It’s well within the realm of possibility that you’ll wake up tomorrow morning to see pictures and/or videos of yourself in some sort of “compromising position” online. Yes, I said pictures of you. Plain old, beer-drinking, hangover-having you. But of course, you probably won’t see those pics until they’ve been liked, retweeted, and shared with a million other people first.

So in case you missed it my friend, welcome to the 21st century.

Devices That May Be Hacked

In most cases, spying is done through the cameras of desktop or laptop computers. So if you’re thinking of taking that extra five bucks out of your dad’s wallet, don’t assume you’re all alone. A hacked camera can cause severe emotional or psychological damage. One 20-year-old Glasgow student was left traumatized after she found out webcam hackers watched her while she was in the bath.

Although computer webcams are the devices that are most commonly hacked, you can also be tracked and watched via your smartphone camera. Even surveillance systems may be hacked and used to track people in real time. This means unscrupulous individuals may be able to watch you at home or at work from multiple angles, all day, every day.

Even more frightening is the idea that your children may be targeted. Imagine the horror of a mother in Houston who found out that footage of her daughters’ bedroom was being live-streamed. And if you thought things couldn’t get any weirder than that, consider the fact that even baby monitors are being used for spying and the data of more than 2.5 million kids was stolen using their favorite smart toy.

How It’s Done

The most common way hackers access your cameras is by using malware. Seemingly innocent links or attachments embedded in emails and online ads may be riddled with Trojan horses. Be sure to steer clear of messages from “sweet Russian girls”; a simple click or download could leave your device infected—effectively handing over control on a silver platter. Oftentimes, the malicious code is packaged with legitimate programs or software so you don’t even notice it. Hackers with remote access can turn your cameras on and off without the camera light ever giving them away.

Another way people may gain unpermitted access is by borrowing your device and manually downloading applications that allow them to access your files, camera, and microphone. These applications can be hidden so you don’t even know they are at work.

And if you thought that was all, my friend, you’d be wrong. Ever thought about app permissions? I’m sure you’re familiar with apps asking for permission to use your camera. What you may not be familiar with is the fact that these apps can capture you on camera at any time when they’re in the foreground (yes, that means even when you’re not using the camera). What’s worse is that no one knows what these apps may be able to access when they’re in the background and out of sight.

Unmasking The Creeps Who Spy On You

The main perpetrators in this spying epidemic are hackers. They use Trojans to claim control of your cameras and watch your every move. They may put your photos and videos on the internet for others to view. In more disturbing cases, nude and intimate moments may be live-streamed on voyeuristic websites.

As I mentioned before, apps can also gain unpermitted access to both your front and rear cameras. Who knows what WhatsApp, Instagram, Snapchat and the like are capturing when your cameras are off and what they’re doing with it? Are they selling footage of you? Maybe. To whom or for what purpose I don’t know, but Snapchat may need the money.

Now, we’ll discuss the attackers you already know about: the government. Did you know that built-in backdoors in your smartphone may allow the government to access your files, read your messages, listen to or record your calls, capture images, and stream video? Just in case you ever thought the government was on your side!

In 2014, documents leaked by Edward Snowden revealed that GCHQ—a British surveillance agency—had collected and stored images from the video chats of millions of Yahoo users under its Optic Nerve program. Yes, tons of raunchy pics were collected and stored as well for…uh…security reasons.

But the blatant disregard for your online privacy doesn’t stop there. In fact, your school and the people you know best may be the biggest culprits.

Between 2009 and 2010, a number of Pennsylvania schools were caught remotely accessing the cameras of laptops issued to their students. And as for your “friends,” they can simply install spy software on your device without you having the slightest clue. Just ask pageant girl Cassidy Wolf. She learned the hard way when she was blackmailed with nude photographs her former classmate had taken via her webcam.

Why Cyber Spying Is Wrong

This one is obvious. We all have the right to data privacy. What’s that, you ask? It’s the ability for an individual (or organization) to determine if and how personal data will be shared with third parties. This includes access to the cameras on your laptop, smartphone, and surveillance system. Your data, your choice.

But is it even your data?

That’s a pretty important question. The terms and conditions you’re so quick to agree to (but never really read) may disagree. Are you unknowingly giving apps permission to access your cameras even when you aren’t using them? Maybe. Is this approach grossly unethical and utterly misleading? Yes. Is it illegal? Perhaps not.

What You Can Do About It

If you’re fine with people spying on you, you might as well stop reading right now. If you’d like a few tips on how to deal with the issue, consider those listed here:

  1. Cover your webcam with tape. If you have an external webcam, be sure to unplug it when you’re not using it.
  2. Install anti-virus software on your PC and your smartphone. It can help spot and block malware before it takes hold. Be sure to keep your firewall enabled as well.
  3. Use protection. No, not that type of protection. Place a secure lock on your phone. Use a fingerprint lock or password to keep nosy “friends” away.
  4. Use your devices on secure networks. Stay away from public networks.
  5. Think carefully before giving an app permission to access your camera.
  6. Update the password for your surveillance system regularly.
  7. Be cautious about the emails that you open and the links or attachments inside them.
  8. Be wary of online advertisements and dodgy chat rooms.
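Tips #3 and #6 both come down to strong, unique passwords. As a minimal sketch (assuming you have Python handy), here’s one way to generate a password using the standard library’s `secrets` module instead of making one up yourself:

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Build a random password from letters, digits, and punctuation."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    # secrets.choice draws from the OS's cryptographically secure randomness
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())
```

Unlike the everyday `random` module, `secrets` is designed for security-sensitive use, which is exactly what a camera-system password is.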

And because I love you, here are some other ways you can be safe online.

The Bottom Line

People are definitely being spied on with their own cameras. You may be one of them. The government, hackers, schools, apps, and people you know may not be as innocent as they seem. While organizations like ours try to bring light to this gross disregard for your right to data privacy, remember to do what you can to keep yourself protected. And for Pete’s sake, never, EVER, trust that shady middle-aged guy who always sits behind you in the coffee shop!

Filed Under: Cybersecurity Tagged With: camera, cybersecurity, data privacy, online privacy, Privacy, Security, spying, webcam

July 19, 2018

Protect Data Privacy by NOT Collecting Data at All

In Hansel and Gretel, the two siblings sprinkle breadcrumbs as they venture into the woods in order to find their way home.

When we browse the internet, we sprinkle metaphorical breadcrumbs of information about ourselves as we go. Unlike the fairytale, where Hansel and Gretel knew what they were doing, the vast majority of internet users are unaware of just how much information they’re giving away on their journey around the web.

Unless you’ve got blockers installed up to your ears, the tracking starts as soon as you open up an internet browser. From that moment, your digital footprints carve a route around the web that can be traced back to you at any moment.

Sites you visit can use these footprints (or breadcrumbs, if we’re sticking with the fairytale theme) to recognize who you are and serve you a more personalized experience.

That sounds great, right?

In one study, 71% of consumers said they’d prefer a personalized experience with ads, while some even expected it from brands. And the easiest way for sites to personalize those experiences is to track the interests and online behaviors of visitors.

From that perspective it works; the consumer gets a personalized experience and brands get to give their customers what they want. It’s a win-win situation.

But is it really that simple?

I mean, we’re not talking epic government data mining expeditions here; we’re simply talking about brands using specific information to better target content to their users. It’s all above board and totally legal.

So what kind of data can these companies get from you?

It can be anything from your current location and the device you’re using to the specific links you’re clicking and the actions you take on certain sites. It all starts with your browser and your IP address – the moment you pop up online, a unique number identifying your connection is recorded, marking when you entered the internet and roughly where you were when you did.

At the same time, your browser is logged as well as other uniquely identifying information like the system you’re running the browser on, the display resolution, and even the battery level of your device. Even if you haven’t clicked your mouse or typed anything in yet, you’re already being tracked.
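To make that concrete, here’s a toy sketch (with entirely made-up attribute values; no real tracker works exactly this way) of how a handful of innocuous-looking signals can be combined into a single identifier that is stable enough to recognize you on a return visit:

```python
import hashlib

# Made-up examples of signals a site can read before you click anything
attributes = {
    "user_agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    "screen_resolution": "1920x1080",
    "timezone": "America/Los_Angeles",
    "language": "en-US",
    "battery_level": "0.87",
}

# Combine the signals and hash them into one stable "fingerprint"
fingerprint = hashlib.sha256(
    "|".join(f"{k}={v}" for k, v in sorted(attributes.items())).encode()
).hexdigest()

print(fingerprint[:16])  # the same device produces the same value every visit
```

No single attribute identifies you on its own, but together they often narrow things down to one device – which is why this technique is known as browser fingerprinting.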

Who Benefits from Collecting Data?

I mentioned earlier that data collection can be mutually beneficial. Consumers don’t have to see ads that they’d never buy from in a million years, while websites can get more information on their visitors to make experiences more personalized and, therefore, get more sales.

But who is it really more beneficial for? If we really get down to the bottom of it, who is really getting the most out of the dissemination of data?

Personalized experiences are nice, right? But are they worth the data breaches that happen and the inevitability that brands will sell that data to completely unrelated companies just to make a quick buck?

Let’s face it: most sites are eager to scrape as much information as they can about their visitors with the sole purpose of making more money. Sure, the thought process might be there to make experiences more enjoyable by personalizing them, but really the goal here is to target more.

Look at Facebook. The data it collects as you browse the site can determine when you’re expecting your firstborn, the exact names and addresses of the companies you’ve worked for in the past, and even your political leaning.

And guess what?

It doesn’t just collect this data to get to know you better as if you’re on some kind of weird, digital first date. It collects it to sell to companies to make money through advertising.

So yes, there are benefits to the consumer; you might not have to pick a particular city every time you want to get the weather because it’s remembered your past choices, or you might not have to shop again for those items you left in your online basket last week, but these benefits are minor compared to the massive benefits companies and sites get from tracking your every move.

Where the Lines Get Hazy…

Of course there are browser security protocols in place that mean sites can’t just go around scraping all sorts of stuff about you. In fact, for the most part, sites can only access the data they’ve collected – as in, they can only see the information you’ve “given” them while you’ve been on their site.

However, something called third-party cookies muddy the waters. These aren’t associated with any particular site, but instead get spread across a number of different pages in, say, an ad network.

Princeton University ran a study that found cross-site trackers embedded in 482 of the top 50,000 sites on the web. That might not seem like a lot in the grand scheme of things, but once these third-party trackers have consumer information, they can sell it on to even more people.
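To see why that matters, here’s a toy sketch (using entirely made-up log data) of how a single third-party tracker, embedded on several unrelated sites, can stitch one visitor’s browsing into a profile using nothing but its cookie ID:

```python
from collections import defaultdict

# Made-up log from one third-party tracker: (cookie_id, site) per page view
visits = [
    ("abc123", "news.example"),
    ("abc123", "shopping.example"),
    ("xyz789", "news.example"),
    ("abc123", "health.example"),
]

# Group page views by cookie ID: one ID links activity across unrelated sites
profiles = defaultdict(list)
for cookie_id, site in visits:
    profiles[cookie_id].append(site)

print(profiles["abc123"])  # ['news.example', 'shopping.example', 'health.example']
```

Each individual site only ever saw one visit; the tracker saw all three.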

While the most sensitive data is supposed to be redacted, consumers are still having to put their trust in a nameless, faceless brand.

But what about the data that consumers are handing over willingly?

Things like Google searches and checking into venues on Facebook?

While sites might be collecting information like which browser you’re using and what your shopping preferences are, you’ve probably handed over more sensitive information like your birth date and exact location without even giving it a second thought.

Does the Future Lie in NO Data Collection?

In May this year, the GDPR (General Data Protection Regulation) came into play in Europe. It means that brands now have to explicitly state to their users exactly what information they are collecting and exactly what they will be doing with it.

Users now have to actively opt-in to providing their information; sites can’t just take it for nothing. Already countries outside of Europe are considering this new method because, well, it just seems like the right thing to do.

But what does it mean for the future of data collection?

Now that users are more aware of their rights when it comes to data collection and have to actively “opt-in” with their information, they are becoming less and less inclined to do so.

If there’s an option to not sell your firstborn, it’s kind of a given that you’re going to go for that, right?

In this instance, the future of data collection looks bleak – especially for sites and brands. If their users aren’t giving up the goods, they’ve got nothing to work with and essentially have to go back to the drawing board.

This might invite new ways of collecting data or a more collaborative approach between consumers and brands so that information can travel between the two in an open and honest way.

The future of data privacy is uncertain for now, especially so soon after GDPR has reared its head. What we do know is that the power will be distributed more evenly between internet users and brands, and sites will no longer be able to take, take, take without building more of a relationship with their visitors.

It sounds quite nice, actually.

But would a world without any data tracking or collection be good? If every person who went online immediately went incognito, leaving not a single trace of who they are or what they’re doing, how would the digital world evolve? How would companies know what their consumers want? How would internet users cope with having to start from scratch every time they went back online?

The questions remain endless, but it’ll be interesting to see which path data collection goes down from here on out.

Filed Under: Cybersecurity Tagged With: cookies, cybersecurity, data privacy, data protection, infosec, Privacy, Security, tracking

July 3, 2018

California Thinks It’s Fixing Data Privacy. It’s Not.

“Your move,” says the new California Consumer Privacy Act of 2018.

Except, this isn’t a game of chess—picture it more like a million-piece jigsaw puzzle called “Cats Around the World,” and it’s been spread out on your dining room table for the past twenty years and you’re only 40 pieces in.

(Sounds like a party, am I right?)

Here’s the thing: the data privacy law that was signed on Thursday by California’s Gov. Jerry Brown is a new piece of the data privacy jigsaw puzzle that has served as the U.S.’s means to protect its citizens’ privacy. It’s certainly a huge step in terms of improved privacy laws, but it’s not quite clear how it fits into the nation’s “big picture.”

So far, the U.S.’s privacy law game is patchwork and somewhat messy. We have federal laws like The Federal Trade Commission Act (FTC Act), the Health Insurance Portability and Accountability Act (HIPAA), and the Children’s Online Privacy Protection Act (COPPA), which are aimed at specific sectors, and we also have state statutes that are aimed at the rights of individual consumers. However, there is no single principal data protection legislation, which means the currently enacted laws don’t always work together cohesively.

And this all adds up to one big, confusing jigsaw puzzle with pieces that sometimes overlap and contradict one another.

Up until now, the timeline of such regulations has been slow and piecemeal. Most of our states are weak in terms of data protection, with a few states—Florida and Massachusetts, for example—serving as “leaders” in data privacy regulations.

Already this year we’ve seen the EU’s General Data Protection Regulation (GDPR) going into effect, and we’ve also seen (way too many) data breaches in the states. The issue of data privacy is gaining notice throughout our nation and throughout the rest of the world, and now some of us are wondering: what does the future hold in terms of data privacy in the U.S.?

California’s sweeping law seems to be a good step in the right direction, but how does it fit into the rest of the puzzle?

An “Interesting” Piece, To Say The Least

California’s new privacy law will give consumers more control over their data and force data-holding companies to become more accountable and transparent.  The Act establishes the right of California residents to know what personal information about them is being collected and to whom it is being sold, plus the ability to access that information and delete it. Additionally, the Act will establish an opt-in consent for individuals under the age of 16.

It’s coming into effect in the wake of the new EU law that was enforced in May, and although it isn’t as extensive as the GDPR, it’s certainly proving to be a forerunner of U.S. privacy rights. 

However, the Act also had an interesting path—surprisingly, it didn’t face much opposition from major companies despite its fleshed-out regulations.

Why not?

Because there was also a ballot measure—the California Consumer Personal Information Disclosure and Sale Initiative—that had been cleared for a vote in California in the fall, which would have proved to be an even greater challenge for companies due to its tighter restrictions and higher fines.

Major companies—like Facebook, Verizon, Uber, and Google, among others—were already lining up against the ballot, and some donated to the Committee to Protect California Jobs in a further effort to oppose it.

Leaders of the Committee to Protect California Jobs said in a statement, “This ballot measure disconnects California. It is unworkable, requiring the Internet and businesses in California to operate differently than the rest of the world…”

In the end, even though enough signatures were collected for the initiative to appear on the ballot, a compromise was reached instead. This resulted in the proponents withdrawing the initiative and the newly approved Consumer Privacy Act entering the world.

So, to sum up the story, the end result basically came about from many of the voters having to choose between “I don’t like this” or “I really don’t like this.”

…Which kind of sounds like the debate you’d have while shopping for the top two hardest bingo games at the store because it’s your great aunt’s birthday and she wants to party.

The “Puzzle” Thus Far: A Quick Data Privacy Timeline

The California Consumer Privacy Act arrives as a new and shiny addition to a slow and dusty timeline of U.S. privacy regulations.

Let’s take a quick peek at a timeline of some of our nation’s data protection laws:

1974 – Family Educational Rights and Privacy Act: restricts disclosure of educational records

1978 – The Right to Financial Privacy Act: restricts disclosure to the government of financial records of banks and financial institutions

1986 – Computer Fraud and Abuse Act: prohibits unauthorized access to obtaining financial information, causing damage, obtaining something of value, or affecting medical records

1986 – Electronic Communications Privacy Act: protects electronic communications during production, transit, and storage, and applies to email, telephone conversations, and data stored electronically

1988 – Video Privacy Protection Act: prohibits videotape sale and rental companies from disclosing data

1994 – Driver’s Privacy Protection Act: restricts states from disclosing state drivers’ license and motor vehicle records

2000 – The Children’s Online Privacy Protection Act (enacted 1998, effective 2000): restricts collection of data from children under the age of 13

2003 – Health Insurance Portability and Accountability Act (enacted 1996; its Privacy Rule took effect in 2003): protects and establishes standards for the electronic exchange and security of health information

Because the U.S. takes a sectoral approach to regulating privacy, many of the current regulations overlap in some areas while providing gaps in other areas.

For example, the Family Educational Rights and Privacy Act (FERPA) generally covers data like student immunization and medical records, but it sometimes conflicts with COPPA, which only protects data for children under the age of 13.

With ever-growing sources of sensitive and valuable data, and the increasing risk of that data being mishandled and exposed, the need for solid privacy regulations is greater than ever.

But with a sectoral approach to regulations, the result is that maintaining standards of data privacy becomes a confusing and complicated task.

The Big Picture (Hopefully Not Of Cats)

There was a time when the sectoral approach was deemed by many U.S. organizations to be preferable to a more overarching approach like the GDPR: industries could establish a more “individualized” way of regulation that suited their needs, and the hodgepodge of regulations sometimes created gaps that organizations could slip through.

However, the gaps are now smaller, and the overlaps replacing them make it significantly more difficult and complicated for organizations to handle their data appropriately. The U.S. is still an outlier in its privacy approach, and it’s starting to get a really bad rap across the globe.

The new California Consumer Privacy Act of 2018 is one more piece to add to the immense jigsaw puzzle that makes up the U.S.’s approach to privacy laws, but it raises important questions: how well will it fit in with already existing regulations, and how much of an influence will it have on future regulations?

Ideally, the nation’s future of data privacy laws will be cohesive, clean, and fit together well in a way that thoroughly protects citizens’ data and is adaptable to numerous industries.

California has made a big step towards the future of data privacy—here’s to hoping that only good things will follow.

Filed Under: Cybersecurity Tagged With: act, california, california consumer privacy act, data mining, data privacy, law, legislation, Privacy, Security, statute

June 20, 2018

How Virtual Reality Is Being Used To Put An End To Cyber Attacks

**This is part of our series highlighting startups who share our mission of trying to make people’s lives just a little easier**


The explosion of new technologies has seen a huge rise in the quantity and – more importantly – the quality of cyber hackers out there. Crude attempts to hack into systems are a thing of the past, and instead expert attackers are collaborating with governments and crime syndicates to do questionable things with data.

For digital businesses in particular, this is a big concern. Large, distributed networks that are scattered around the web lend themselves perfectly to cyber-attacks from sophisticated hackers, and those hackers are more savvy than ever before.

New Israel-based startup Illusive Networks was built to stop these attackers in their tracks – literally (albeit digitally).

Malicious hackers will find every entry point they can to wriggle into a network, often bypassing firewalls that companies thought would protect them and their assets. Because of this, Illusive Networks has said goodbye to firewalls and has instead gone for a different method of creating a new world for the hacker to disappear into (and get lost).

If it sounds like something out of Minority Report, you might be onto something. And, if it sounds a bit far-fetched, you’re on the same wavelength as us. I mean, creating a whole new world simply to distract potential hackers seems like a lot of extra effort, right?

This is where it gets interesting.

You’ve heard of virtual and augmented reality, right? These are two technologies that either replace real reality entirely (VR) or layer digital content over the top of it (AR) to bring participants new perspectives and new worlds.

Illusive Networks taps into these technologies and creates a false version of a company’s network to either trap the hackers in an alternate “reality” or kick them out completely.

Isn’t Illusive Networks Just Like the Others?

The answer to this question is, of course, yes and no.

Businesses have access to thousands of different security products these days, and there seems to be a new anti-cyberattack startup popping up every single day.

Because of this, business owners and security leaders are resisting adding even more tools to their security arsenal – the last thing people want or need are noisy alerts every time a hacker tries to break through a digital barrier.

“But technologies that truly look at existing problems in new ways and are purpose-built to help companies deal with the unexpected can deliver significant efficiencies that reduce rather than add to the security burden,” says Illusive Networks’ Founder and CEO, Ofer Israeli. “Distributed deception technology is certainly one of them.”

How Illusive Networks Works

On its website, Illusive Networks says that it:

  • Maps potential paths attackers can take to get to the goods (a.k.a. your most important assets)
  • Finds and gets rid of risky areas that help attackers reach your assets
  • Cloaks your system with thousands of high-fidelity deceptions that trigger an alert when one wrong move is detected
  • Offers real-time forensic reports to help response teams stay in control

But what do all these things really mean? And what even is “distributed deception technology”?

“There will always be a phishing or drive-by attack,” says Israeli. “Humans are the weakest link and always will be and will continue to make mistakes. But once the hacker is in, now we have an attacker who needs to orient himself.”

Essentially, distributed deception means creating a series of fake journeys a potential hacker could take. The aim is to confuse, deceive, and catch them red handed.

Illusive Networks creates an “illusive” version of a company’s network (that alternate reality we were talking about earlier). And, once a hacker finds themselves in this parallel universe, the tool identifies the individual and either keeps them shut in there forever or kicks them out for good.

Think about it: to strategically plan a pathway to the main asset, a hacker needs to consider two things. They need to know what options they have for where they can go next, and they need to know how they can access the powers needed to execute that particular move. In the security world, this two-step process is known as orientation and propagation.

You see, to get to the coveted prize, a hacker needs to make a series of hundreds or thousands of tiny moves – something that Illusive Networks aims to put a rapid stop to.

Say, for example, a hacker has three genuine paths toward their next step. Illusive Networks pads that out to twenty choices, of which only the original three are real and the other seventeen are traps. If the hacker takes any of those seventeen options (and with seventeen decoys out of twenty, the odds say they will), the system is alerted to an unwanted intruder.
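The arithmetic behind that example is worth making explicit. Here is a rough sketch of how quickly random exploration gets caught – the three/seventeen split comes from the example above, and the function name is invented for illustration:

```python
# Decoy-path odds from the example above: 3 real next steps are padded
# out with 17 traps, so a random move hits a decoy 17 times out of 20.

REAL_PATHS = 3
DECOYS = 17
TOTAL = REAL_PATHS + DECOYS

def p_undetected(moves: int) -> float:
    """Chance an attacker choosing next steps at random avoids every
    decoy for the given number of moves."""
    return (REAL_PATHS / TOTAL) ** moves

print(round(1 - p_undetected(1), 2))  # 0.85 -> a single move is probably caught
print(round(1 - p_undetected(3), 6))  # 0.996625 -> by three moves, detection is near-certain
```

In other words, even a short attack chain through a decoy-saturated network is overwhelmingly likely to trip an alert.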

Likewise, if a hacker needs to harvest credentials to make their next move, Illusive Networks plants dozens of fake credentials alongside the real ones so that, again, the moment they pick a decoy the system goes into lockdown.
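The credential trick is essentially a honeytoken. A minimal sketch of the idea – the account names and checker below are invented for illustration and are not Illusive Networks' actual mechanism:

```python
# Plant fake credentials alongside real ones; any login attempt that
# uses a planted account is, by construction, an intruder.

real_creds = {"svc_backup": "hunter2"}
decoy_creds = {f"svc_admin{i:02d}": f"decoy-{i}" for i in range(17)}

def check_login(username: str, password: str) -> str:
    if username in decoy_creds:
        # No legitimate user ever touches a decoy account.
        return "ALERT: decoy credential used"
    if real_creds.get(username) == password:
        return "ok"
    return "denied"

print(check_login("svc_admin03", "decoy-3"))  # trips the alert
print(check_login("svc_backup", "hunter2"))   # normal login succeeds
```

The design point is that decoy credentials generate zero false positives: nobody has a legitimate reason to use one, so any hit is a high-fidelity signal.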

So, rather than shutting out hackers entirely like firewalls do, Illusive Networks deceives them so it’s almost impossible for them to reach their end goal. The startup has even brought several ex-attackers on board who have shared their perspectives to make solutions more realistic and useful.

Perhaps the most advanced thing about the startup is that neither the professionals working for Illusive Networks nor the hackers can see the deceptions until they walk into them head first. This means the deception sensors are only triggered if someone “bumps into them”, but it also means that it only takes a few moves (out of potentially thousands) for an attacker to be detected and kicked out.

What Does This Mean for the Future of Cyber Attacks and Data Breaches?

Illusive Networks plans to bring a new age of security to digital businesses, one that will see fewer hackers succeeding even as attacks grow more sophisticated every day.

Data breaches could be a thing of the past, as distributed deception means hackers don’t have to just navigate one obstacle like a firewall. Instead, there are obstacles all around them (think security lasers in a museum as a real-life example), and every wrong move can be quickly detected.

But while it might be comforting to know that our personal data looks to be safer than ever, the technology behind Illusive Networks might not be limited to stopping hackers in the future.

What if hackers start using it to their advantage? These are people who are highly skilled technically, so surely they're reading up on this new technology as we speak and working out ways to turn it to their benefit? If they're not, they're missing a trick.

Systems like the one Illusive Networks is using are groundbreaking in the war against cyber attacks but only time will tell if they’re victorious.

Filed Under: Cybersecurity Tagged With: AR, augmented reality, cyber attack, cyber crime, cybersecurity, data breach, Privacy, startup, virtual reality, VR

May 8, 2018

Good Idea / Bad Idea: Protecting Yourself Online Edition

I have always loved to think of my life decisions in the simplistic black and white terms of the classic Animaniacs Good Idea / Bad Idea segments.

Here’s a great example:

Good idea: taking a deep breath before jumping into a swimming pool.

Bad idea: taking a deep breath after jumping into a swimming pool.

Animaniacs is a classic, and I love how good idea / bad idea makes it seem so easy to do the right thing. Of course, over the years I’ve found that life is rarely as neatly broken up into good and bad as the show made it seem. However, keeping your data and yourself safe online is one area in which the good and the bad are fairly self-evident, and it’s also an area that becomes more and more important each day. So let’s do it. Also, watch Animaniacs if you haven’t seen it — it’s a damn classic.

Good idea: Make sure you don’t share information about your location. Do not geotag!!
Bad idea: Posting your address, followed by a long story about how you’re home alone with the doors unlocked.

Good idea: Review your privacy settings on each of your platforms regularly.
Bad idea: Just making everything public, because you “don’t have any information worth stealing anyway.”

Good idea: Dying without ever knowing which Spice Girl you are most like based on your posts.
Bad idea: Authorizing any third party application you’re not familiar with (e.g. Perzonality Testz R Us LOLZ) to access your Facebook data for any purpose at all. It’s not worth it to find out which vegetable most closely matches your face. Though good lord I know it’s tempting. You should also prevent your data from being shared with apps that your Facebook friends are using – find more tips here.

Good idea: Regularly Google yourself in order to find out what potential employers would see if they checked up on you.
Bad idea: Avoid Googling yourself in order to never find out what potential employers would see if they checked up on you. I mean, I get it, but sticking your head in the sand to ensure that no problem ever reaches you at all is no way to….oh forget it, it’s a great way to live and I cannot hear you from way down here in the sand.

Good idea: Limiting the number of digital platforms you give your personal data to.
Bad idea: Setting up a profile on every new platform you encounter, using it for a few months, and then abandoning it. If a potential employer finds your 2003 MySpace blog, they’re the ones who will be sorry.

Good idea: Two factor authentication.
Bad idea: Giggling because “two factor authentication” sounds like something a robot would say. It does, but someone who uses the same password for everything, and that password is “password1”, really shouldn’t be laughing.

Good idea: Sharing pictures of your cat on Reddit or Imgur.
Bad idea: Sharing pictures of your child on Reddit or Imgur. This is at every parent’s discretion, and far be it from me to tell you how to parent your kid, but also, why are you broadcasting the most vulnerable member of your family on the platforms with the most terrifying creeps?

Good idea: Not sharing your password with friends.
Bad idea: Using your most trusted barista at Starbucks as some sort of human password manager. I guarantee that barista will turn on you the moment your drink order becomes too idiotic. Don’t ask me how I know. *Sips Venti Iced Skinny Hazelnut Macchiato, Sugar-Free Syrup, Extra Shot, Light Ice, No Whip.*

Good idea: Taking everything you see online with a grain of salt
Bad idea: Kicking yourself for not realizing that Elvis was homeless in San Diego this whole time. This has nothing to do with privacy, but it’s important for your safety that you always verify what you read online. And also it’s important that I mention this story about Elvis because I was deeply upset when I found out this wasn’t true.

Good idea: Never revealing too much about yourself to strangers online.
Bad idea: Striking up a friendship with a nine-year-old painting prodigy, her mom, and her teenaged sister, only to later discover that it was actually just a middle aged woman playing three different characters. On an unrelated note, I just watched the original Catfish documentary, and boy is it good!!

Good idea: Familiarizing yourself with different kinds of phishing attacks.
Bad idea: Entering your online banking information into a random site so you can receive your tax refund that is being emailed to you and you haven’t submitted your taxes in six years, but this should turn out pretty good.

Good idea: Declining to engage with trolls and bullies online.
Bad idea: Becoming an infamous internet troll and getting enough people angry at you that you eventually get doxxed.

Good idea: Only posting things online that you’re comfortable with everyone seeing, even if your privacy settings are airtight.
Bad idea: Posting specific, detailed rants about your boss on Facebook that would both allow your boss to finally get the leverage to fire you and also enable any enterprising stalker to easily find you at your workplace.

Good idea: Logging off once in a while to read a book.
Bad idea: Seriously, read books, or this is what will happen.

Filed Under: Cybersecurity Tagged With: animaniacs, data privacy, humor, online, Privacy, protection, Security
