
Tuesday, 21 August 2018

Is democracy possible in the surveillance state?

For today's contribution to our digital citizenship strand, Susan Halfpenny tries to post to the blog without anyone noticing...

An ivy-covered wall on which is mounted a CCTV camera

In recent years we've seen a growth in mass surveillance of citizens by state and intelligence agencies, through the monitoring of digital communications and the gathering of internet usage data. In the brief time that you have been reading this article, the United States National Security Agency (NSA) alone will have selected close to two terabytes of data for review; that’s the equivalent of about 50 two-hour high-definition movies.

In 2013, NSA sub-contractor Edward Snowden leaked classified information from the NSA that revealed numerous global surveillance programmes. The first and most controversial story revealed that the NSA were gathering phone records from telecoms company Verizon, with evidence quickly following that this mass mining of data extended to virtually every other telephone company in America and that data taps were happening on a global scale.

A 2015 report by US think-tank Freedom House found 14 countries imposing new laws or directives that increased surveillance or restricted online anonymity. Surveillance by states and intelligence agencies has already reached almost Orwellian levels. Big Brother is watching you… we are just missing the ubiquitously displayed slogan to remind us of the fact.

“If you've nothing to hide, you’ve nothing to fear”

A common position taken in defence of government surveillance programmes, when their threat to privacy is raised, is the ‘nothing to hide’ argument. In Britain, for example, when the government installed public surveillance cameras in towns and cities, a campaign slogan for the programme declared “If you’ve got nothing to hide, you’ve got nothing to fear”. The argument has emerged again in Britain in response to the provisions of the Investigatory Powers Act, which enables the interception of communications and the retention of communications data, and in defence of the data gathered by UK signals intelligence service GCHQ.

The ‘nothing to hide’ argument is founded on the idea that if you are a law-abiding citizen then the data gathered should be of no concern: the surveillance poses no threat to your privacy, but rather offers protection from criminals and terrorists.

Why privacy matters (even if you have nothing to hide)

“Arguing that you don’t care about the right to privacy because you have nothing to hide is no different than saying you don’t care about free speech because you have nothing to say.”

Edward Snowden on Reddit

Perhaps the most rehearsed response to the ‘nothing to hide’ argument is that benign governments can all too easily give way to malign ones. That is a significant reason for caution. But the stakes are high even without a fully realised dystopia.

The ‘nothing to hide’ argument narrows privacy down to a single essence, ignoring the complexity of the concept. Privacy is multifaceted; in a 2011 piece for Wired, Bruce Sterling described it as a plurality of different things that bear resemblance to one another but do not share any one element. The invasion of privacy might, for example, be the disclosure of your secrets: revealing how much you get paid, say, when you didn’t want others to know. Here the harm is that information you would have preferred to keep concealed is revealed to others. Another invasion of privacy might be a Peeping Tom watching you as you go about your personal business (taking the kids to school, perhaps, or eating dinner with family or friends). Here the harm is the watching itself, something most of us would find creepy even if the watcher learns nothing sensitive and shares the information with no one else.

There are many other forms of invasion of privacy: blackmail, misuse of personal data, deception, violation of confidentiality, intrusion, misappropriation, and the gathering of extensive data, to name but a few. Returning to the concept of privacy, then, we can see that it involves many things and cannot be reduced to one simple idea.

Civil liberties and intellectual freedom

Being able to access information confidentially enables citizens to research, investigate and seek out ideas that challenge the status quo. It enables us to freely explore without fear of retribution; to question politics, culture, democracy and society.

Mass surveillance therefore inevitably poses a threat to citizens’ civil liberties and intellectual freedom, but so does another aspect of state control: censorship. That's an area we'll take a look at in our next post in a fortnight's time.

Wednesday, 11 July 2018

Is the password system broken?

For our latest look at the topic of digital citizenship, Susan Halfpenny must use at least one lower case character, upper case character, number and special character.

Padlocks on a rail

Large data breaches in recent years have led to millions of accounts being hacked and personal information being shared (take a look at World’s Biggest Data Breaches for a visual representation): the Yahoo! hack in 2013 resulted in more than one billion user account credentials being stolen.

The theft of username and password information can often lead to more than just one of your online accounts being compromised. Matt Honan has written at length about his experience of being “epically hacked”, where in the space of an hour his Google account was deleted, his Twitter account taken over and his AppleID account broken into, resulting in data being wiped from his iPhone, iPad and MacBook.

Hackers will often exploit weaknesses in security systems to access information. For example, in the iCloud leak of celebrity photos in 2014, hackers may have taken advantage of a flaw in the application interface which permitted unlimited attempts to guess passwords. Could companies do more, then, to protect our information?
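Rate limiting is one of the simplest protections a service can add against this kind of guessing. As a rough illustration only (not a description of Apple's systems, and using made-up names and thresholds), a login service might track failed attempts per account and refuse further guesses once a limit is reached:

```python
import time
from collections import defaultdict

# Illustrative thresholds only; real services tune these carefully.
MAX_FAILURES = 5            # failed guesses allowed within the window
WINDOW_SECONDS = 15 * 60    # how far back failures count against you

failed_attempts = defaultdict(list)   # username -> timestamps of failures

def allow_login_attempt(username, now=None):
    """Return True if this account may attempt a login right now."""
    now = time.time() if now is None else now
    recent = [t for t in failed_attempts[username] if t > now - WINDOW_SECONDS]
    failed_attempts[username] = recent        # forget stale failures
    return len(recent) < MAX_FAILURES

def record_failed_attempt(username, now=None):
    """Call this whenever a password guess for the account is wrong."""
    failed_attempts[username].append(time.time() if now is None else now)
```

A real service would do considerably more (throttling by network address and device, alerting the account holder, and so on), but even this much makes unlimited password guessing impossible.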

Encryption and adding layers of security to applications can obviously help, but the major flaws undermining everything else are the limitations of human memory, our collective lack of understanding of what makes a password secure, and our lack of patience. More often than not we give our information away ourselves, through phishing emails and poor personal information security, such as using the same weak password for every account. We might try to come up with more, but in our modern, busy lives, which of us can remember a hundred and one different and adequately complex passwords?

Even those of us who should have, or do have, a high level of awareness and understanding of information security will still fall prey to laziness. I'm currently trying to use two-step authentication to keep my accounts more secure, but I hate it when I go to deliver a workshop and realise I've left my phone on my desk. I then have to head back to the office to collect it so that I can receive the text message containing the additional one-use code I need to access my account. At times like this the temptation to switch two-step authentication off is very compelling!
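For anyone curious about where those one-use codes come from: app-based two-step authentication typically uses time-based one-time passwords (TOTP, described in RFC 6238), where your phone and the server share a secret and each independently derive a short code from the current time. The sketch below is a bare-bones illustration of that derivation; codes sent by text message are generated differently (server-side), so treat this as an analogy rather than a description of the SMS system:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_base32, interval=30, digits=6):
    """Derive a time-based one-time password (RFC 6238, HMAC-SHA1)."""
    key = base64.b32decode(secret_base32, casefold=True)
    counter = int(time.time()) // interval            # 30-second time step
    message = struct.pack(">Q", counter)              # 8-byte big-endian counter
    digest = hmac.new(key, message, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Phone and server both hold the same secret, so both can compute the same
# six digits for the current 30-second window.
print(totp("JBSWY3DPEHPK3PXP"))   # example secret, not a real one
```

Because both sides can compute the code locally, nothing secret has to travel over the network at login time beyond the six digits themselves.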

The current password system relies too much on our memory and our patience, and on everyday people who aren't trained to think about information security all day. We might therefore say that the current password system is broken.

So how are hackers exploiting security flaws and human errors?

You may be surprised to hear that hackers aren't necessarily using complicated code to break into accounts. Yes, sometimes large-scale attacks take place using programs that exploit security flaws, but often passwords can be guessed through social engineering: using the information you share online. For some stark examples, take a look at this article by Kevin Roose, where he exploits the digital literacies of hackers to highlight security risks.

Norton collated some useful information about the different ways that hackers hack into your passwords, summarised below:

  • Social engineering: the use of information lifted from your social media to gather answers to your security questions… things like the school you went to, your pet’s name, when you got married, when it’s your birthday, your favourite band… Hackers can gain access to all this information and use it to answer your security questions and guess your passwords.

  • Dictionary attacks: using programs that cycle through a predetermined list of common words often used in passwords. If you are using Password1 as the password for your account then what did you think was going to happen?! To better protect your accounts from dictionary attacks, avoid using common words and phrases in your passwords, or avoid recognisable words altogether.

  • Password crackers: programs used to crack passwords by brute force, repeatedly trying millions of combinations of characters until your password is found. Shorter, less complex passwords are quicker for a program to guess; longer, more complicated passwords take exponentially longer, so the longer and weirder the better! (The rough calculation after this list shows just how quickly the numbers grow.)
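To put rough numbers on that last point, here is a back-of-the-envelope calculation. The guessing rate is an arbitrary assumption purely for illustration; real attack speeds vary enormously depending on hardware and on how the stolen passwords are stored:

```python
# Worst-case brute-force effort for passwords of different lengths and
# character sets, at an assumed (purely illustrative) guessing rate.
GUESSES_PER_SECOND = 1e9   # assumption for illustration only
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

def worst_case_years(length, alphabet_size):
    combinations = alphabet_size ** length
    return combinations / GUESSES_PER_SECOND / SECONDS_PER_YEAR

for length in (6, 8, 10, 12, 16):
    lower_only = worst_case_years(length, 26)   # a-z only
    full_keys = worst_case_years(length, 94)    # printable ASCII characters
    print(f"{length:2d} chars: {lower_only:.3g} years (a-z), "
          f"{full_keys:.3g} years (full keyboard)")
```

The exact figures matter far less than the shape of the growth: every extra character multiplies the work, which is why length and unpredictability beat cleverness.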

But if we’re creating lengthy and complex passwords, how can we hope to remember them? Mnemonics can only get us so far. We could use some form of encrypted password management software, though vulnerabilities apply there too: guess the master password and a hacker has access to all of your passwords! Still, it should be more secure than using the same password(s) for everything, because there’s only a single point of failure (the password manager) rather than one for every account you own. Whatever method you choose, a set of complicated but securely stored passwords should be far more secure than a handful of easily memorable ones, if only because they’ll be considerably harder to guess.
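Most password managers will generate strong passwords for you, but for the curious, here is a minimal sketch of how such a generator might work, using Python's standard `secrets` module (which, unlike `random`, is intended for security-sensitive use). The word-list path in the comment is just a common default and may not exist on your system:

```python
import secrets
import string

def generate_password(length=20):
    """Generate a random password from letters, digits and punctuation."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

def generate_passphrase(wordlist, words=5):
    """Generate a passphrase by joining randomly chosen words."""
    return "-".join(secrets.choice(wordlist) for _ in range(words))

print(generate_password())
# With a word list available, e.g. /usr/share/dict/words on many Unix systems:
# print(generate_passphrase(open("/usr/share/dict/words").read().split()))
```

A randomly generated passphrase of several words can be both long and memorable, which is a useful compromise for the handful of passwords (like a password manager's master password) that you do have to keep in your head.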

For more help and advice, take a look at the IT Services tips for choosing a strong password, and test yourself in our information security myths quiz.

Wednesday, 27 June 2018

On the internet, nobody knows you're a dog (unless you tell them)

Our series of explorations into what it means to be a digital citizen continues with Stephanie Jesper pretending to be a dog...

A cat hiding behind a pole

As Peter Steiner’s 1993 cartoon for the New Yorker put it: “On the Internet, nobody knows you’re a dog.”

The internet only knows what you tell it. And what you might want to reveal may vary according to what it is that you want to do. There is a long tradition on internet forums and bulletin boards of using a pseudonymous screen-name or handle. In large part this was a mechanism to permit discussion of ‘sensitive’ subjects: an alias is a very simple way of distancing your online profile from your off-line one, be it for social, professional, or even legal reasons. But choosing an amusing or clever name can also serve as a fun means of expressing a persona.

What is more, pseudonymous screen names can facilitate objectivity in a discussion: social factors such as gender, age, location, education and race may be obscured (partially or entirely), reducing the impact of preconceived biases. A screen name can also allow a user to experiment with or hone their identity (for example in the trans community), and may give confidence to those who might, under their real name, feel socially awkward for whatever reason.

This confidence boost can be double-edged, of course: hiding behind a screen name may give you the courage to express yourself and your opinions and to explore areas of society and culture that you might otherwise have been too afraid to examine (be it a question of taboo, reputational risk, a fear of failure, or some other impediment), but it can also give you the courage to test the limits of your powers, to be abusive and to threaten other users without fear of recourse. At its most pathetic, this is manifest in Wikipedia vandalism and childishly disruptive behaviour in internet forums; at the other extreme lie persistent trolling, bullying, and even death threats.

By using an online pseudonym, we make it intentionally difficult for people to connect our online activities to our real-world persona, which is fine unless we actually want that association. We may be looking to promote ourselves, and to connect with people we know, used to know, or want to know in real life, in which case a pseudonym is probably going to get in the way. This is why Facebook and LinkedIn operate real name policies: they’re geared around people finding other people. The problem with being findable, however, is that you can’t especially control who can find you. Having a potential employer find your LinkedIn profile might be a positive thing (assuming it’s an attractive profile); having them find your Facebook profile might be less positive, depending on what you’ve got on there and how locked down it is.

There’s a tradeoff to be had between self-promotion and freedom of expression, and many approaches to take. You could lead a completely uncontroversial life, online and off, and have the tenacity and resilience to cope with any unwanted intrusion. You could live entirely under the cloak of anonymity, but then you may find that you’ve relinquished control of the top search results for your real name, which is not necessarily a favourable state of affairs. A better solution is to conduct your social activity under one name and your professional activity under another: some people, especially on Twitter, make use of two accounts – one professional and one social – and Twitter’s own mobile apps support switching between multiple accounts. But in many professions the social use may actually prove a professional advantage, and separating the two can be difficult, and arguably a false dichotomy.

The information trail we leave online isn’t just a reputational concern. We can give away a lot of personal details, and while for the most part this will be just noise on the internet, it is information that can be used against us.

The TV series Hunted provides an effective (and indeed entertaining) illustration of how our online activity can betray our movements, our intentions and our personal networks. In some cases, confiscated devices, phishing attempts and hacked passwords are used as a means of gaining sensitive information, but all too regularly the clues hide in plain sight: on open social media accounts that any of us can see.

If you’re posting in an open forum, anybody can access that information. Tweeting something like…

Holiday! Just hope my new bike can bear 2 weeks without me, languishing in the backyard of 12A The Grove, Chepstow. Forgot to chain it. Oops

…is obviously a bad idea. But communicating even snippets of such information has risks (as we explore on our Subject Guides) because snippets can build up into a larger picture about you and your circumstances.

It isn’t just what we post that poses a potential risk. Our accounts themselves may be sharing more than we might think, as the Cambridge Analytica scandal has demonstrated. If you’ve ever seen your Facebook profile picture staring back at you from the comments section of a blog post, inviting you to participate, or if you’ve seen adverts targeting your interests, you’ll have an idea of the kind of thing that can get passed around. It’s a good idea to go through your social media security settings with a fine-toothed comb every now and again, to lock down as much as you’re able, but inevitably there is a tradeoff between security and functionality. As with so much, it’s a case of striking a balance and being aware of the risks involved.

Monday, 18 April 2016

Cyber Essentials: IT security across the University

Matthew Badham explains why Cyber Essentials accreditation puts the University ahead in bids for research grants.


Maintaining good cyber security - and being able to demonstrate that we do so - is increasingly important. It protects your account and data, and it's a requirement of many funding organisations when they consider allocating research grants. Good news then that in December 2015 the University of York was awarded Cyber Essentials accreditation covering all managed desktops and laptops.

What is Cyber Essentials and why do we need it?


Cyber Essentials is a government-supported scheme designed to help organisations protect themselves against security breaches. It considers everything from the infrastructure of our network to your desktop PC or laptop. Our compliance with the standard demonstrates that the University meets fundamental security standards for all supported IT provision. Gaining Cyber Essentials certification gave us the opportunity to review all the precautions we have in place, ensuring that we provide an optimum level of security.

Having worked through the checklist of standards required, we can now be confident that we meet all the key requirements, both for the certification, and for funding bodies.

How does this help me?


If you are using a managed desktop you can be reassured that you are protected by the systems that the University has in place. Increasingly, funding bodies and organisations are seeking assurance that the IT systems of those applying for research grants are compliant with basic security standards. Quoting the University's accreditation is a useful way of providing this assurance and of enhancing your bid.

Who has Cyber Essentials?


Developed by the government and industry, the accreditation is held by an increasing number of organisations that want to demonstrate to customers and external companies that they are taking essential precautions with their IT security. We are one of the first universities to gain it.


What does Cyber Essentials cover?


Any managed Windows desktop or laptop, and the infrastructure behind your connection. If you are using an IT Services managed desktop or laptop, and saving your files on central filestore, then you are covered by the certification and can specify this on grant applications for sensitive data. If you are using a managed iMac, a managed Linux desktop, or an unmanaged device (eg an OS X or unmanaged Windows laptop), you are not covered. Unmanaged devices can't claim this certification because we can't ensure that they meet the required standards in areas like updates, patching, and use of anti-virus software. However, we will look at including managed Linux and Apple devices in a later phase of this work.

Image courtesy of www.itgovernance.co.uk

What comes next?


Having successfully achieved the first stage of accreditation, we are now working towards the next stage, Cyber Essentials Plus, which will require us to meet an even higher standard of security.

Any questions…


If you'd like to find out more, please contact IT Support who will forward your query to Arthur Clune, the Assistant Director of IT Services (Infrastructure).

Tuesday, 1 March 2016

That sinking feeling...

The horror of losing your work can give you nightmares. Tamsyn Quormby and Pritpal Rehal tell you how to save safely and avoid that sinking feeling...



See ya later, alligator by Jason Mrachina
Used under a Creative Commons license
One of the most common problems that the IT Support team come across is people losing their work, or finding that their files have become corrupted.

Recently, a student came to us when she was unable to access the work stored on her USB stick. She'd been working for hours, and saving her files regularly, but when the thin client she was using was accidentally rebooted, the USB stick became corrupted. We used every trick in our armoury, but we weren't able to restore the files for her. She showed remarkable forbearance at receiving this news; a single tear, and a muttered curse. But it was desperately frustrating to know that if she'd been using the virtual desktop to save to her central filestore instead, the sudden reboot would have caused her no problems.

A single tear by Lauren C
Used under a Creative Commons license
Not everyone is able to be so sanguine in the face of lost work. Every member of our IT support team has had to console a student or member of staff in tears of distress and frustration when their work has been lost - this can happen when a USB stick becomes corrupted or lost, or when a laptop is stolen or irreparably damaged.

Our advice is simple and unchanging:

  • Don't rely on a USB stick as the main storage method for your work: not only can the data easily become corrupted, but the device itself is easy to lose or break.

  • Don't save the only copy of your work to the local drive of your computer: if it isn't backed up elsewhere, it will be lost if your computer is stolen or damaged.

Where to save your work


So, how should you save and back up your work? We recommend the following:

Central filestore


Every member of the University has a central filestore, their H: drive, with 2GB of storage allocated to them. Your central filestore is regularly backed up and you can access it from pretty much any device (PC, Mac, Linux, mobile devices...), whether you're on or off campus.

Google Drive


Google Drive offers storage 'in the cloud' (hosted and backed up in multiple locations) that you can access via a web interface or an app wherever you are. As a member of the University of York, you have an unlimited quota in your Google Apps account.

Lessons learned...


If you lose your work, always contact IT support for advice. We'll do our very best to help you. But to avoid disaster, keep your work safe by saving it to your central filestore or Google Drive.

Tuesday, 8 July 2014

Someone wants your password

Joanne Casey would like to know how we make everyone a little bit more suspicious.

There's always someone trying to steal people's passwords...

...and sadly, there are always people who allow them to do it.

A recent phishing email. The URL doesn't link to mail.york.ac.uk; your best bet is to mark it as spam.
It's pretty normal these days for emails to arrive in our inboxes purporting to be from 'York Admin', 'System Administrator Team', or similar.

These messages may warn you that your account needs to be validated, alert you to withheld emails, offer you an upgrade, or give you access to a shared Google doc. They include a link, which might appear to be a genuine University URL, and if you click on it you'll be asked to enter your username and password.
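One tell-tale sign is that the text displayed for a link and its actual destination don't match. As a toy illustration (not a description of how our filtering works, and using a made-up address), you can pull the real targets out of an HTML email body and compare them with the text on display:

```python
from html.parser import HTMLParser

class LinkAuditor(HTMLParser):
    """Collect (displayed text, actual destination) pairs from HTML."""
    def __init__(self):
        super().__init__()
        self.links = []
        self._href = None
        self._text = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href", "")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.links.append(("".join(self._text).strip(), self._href))
            self._href = None

body = '<p>Please <a href="http://phish.example/login">sign in at mail.york.ac.uk</a></p>'
auditor = LinkAuditor()
auditor.feed(body)
for text, href in auditor.links:
    print(f"displayed: {text!r}  actually goes to: {href}")
```

If the displayed text looks like one address and the real destination is another, treat the message with extreme suspicion.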

These emails are always a scam - their sole aim is to steal your password.

Lots of people already know that, and lots more are suspicious enough to check with us before they respond. But each time one of these phishing emails is targeted at University email accounts, we see people hand over their username and password, which means that we have to disable their account as soon as we become aware that it's been compromised.

Our phishing advice poster
We take various approaches to this:
  • If possible, we block access from the campus network to malicious websites - but this doesn't help if people are at home or elsewhere when they click on the link.
  • We include information about spotting and dealing with email scams on our website, in our user guide, and in flyers handed out at Freshers' Fair and Staff Induction events.
  • We post advice on our Twitter and Facebook feeds
  • When there's a phishing attack underway, we send warnings to departments for circulation to staff and students
  • We've produced a poster that departments can display on their noticeboards
But we know - because we keep having to block accounts - that people keep falling for these emails, and we'd love to find out what else we can do to make sure this message reaches everyone in the University. How do you think we can tackle this? What's the right way to make sure everyone is able to spot a potentially dodgy email? We'd welcome your thoughts and comments below.


Find out more about spotting phishing attacks and other email scams at: