Understanding Research

Marianne Franklin

Terms of reference

To suggest that research can be carried out these days without using the web, that automated, computer-aided research existed before the arrival of the internet, or that document analysis and scholarly writing took place, and still do, without computer-mediation is arguably what separates the ‘Google generation’ from the ‘silver surfers’ (see UCL 2008). Nonetheless, like any technology with a significant impact on society, politics, and culture (think of electricity, the telephone, printing, or photography), the internet too has a history. It too has changed over time and will do so again, sooner rather than later in this case, for what is striking about internet technologies is that this history is relatively brief; not even that of a human lifetime.
Moreover, many aspects of this ‘short history’ remain disputed; the ownership, control, and future design of the internet are currently in flux, making it a fast-moving area for internet scholars, students, and researchers.

BACK TO THE FUTURE: A QUICK PREQUEL

The internet is not an organized system. No-one is in charge [yet]. It is not primarily a network or even a network of networks. Above all it’s a simple fact – the fact that millions of computers across the world can communicate with each other.

(Ó Dochartaigh 2009: 3; see Gaiser and Schreiner 2009: 7–9)

The internet, as the particular combination of ICTs that permitted computers to communicate and computer-users to navigate these interconnections in a ‘user-friendly’ manner, namely the world-wide web and its family of internet protocols, was developed in the late 1980s. It took off in the 1990s with what is now called the ‘dotcom boom’, a bubble which collapsed around 2001. Until then it was the preserve of computer geeks, government military establishments, and software designers working in corporate IT and government R&D departments.

It may even seem irrelevant to consider that, not so long ago, teaching, assessment, and supervision took place face-to-face, by phone, or in written form; virtual learning environments and the digital uploading of essays were things of the future. This is the first distinction you need to make: the difference between the internet’s core functions and underlying architecture on the one hand and, on the other, the plethora of products, services, and gadgets that flood the market. Only hindsight can tell which of these were fads and which were here to stay.

Technical aptitude, or familiarity with website design or a particular programming language for software development, does not automatically add up to proficiency in internet/online research. Expertise in one area may well be equalled by relative ignorance in another. Why?

  • First, because computer programs are like languages – closed code systems – so fluency in one is not necessarily fluency in another.
  • Second, the web is vast, and so are the amount of data available there, its everyday traffic, and the cumulative know-how it embodies. Know-how, and the will to acquire it, is therefore not only relative but also time-sensitive.
  • Third, that know-how is also easily shared, iterative, and so cumulative.

So which web – which internet – is at stake here? There have been at least three generations of the internet, i.e. of computers communicating with each other as we know it. Some would argue that even this is not strictly correct, since the relevant developments go back to the 1960s. For the sake of argument we stipulate here that what is generally understood as the internet proper spans three overlapping periods:

  • 1980s, when academic and early internet communities were developing ways for computers, and then people via computers, to communicate; for example, ARPANET (USA), Minitel (France), word-processing software, and the personal computer.
  • 1990s, when hypertext transfer protocol, hypertext markup language, and the accompanying web-browsers emerged as the world-wide web. This is the age of giants like IBM, Microsoft, and Intel, inter alia; also the years in which mobile telephony, and slowly mobile internetting, take hold.
  • Since the early twenty-first century, when Web 2.0 (i.e. social networking) applications, smart phones, and other devices began to merge email, image, and sound into one integrated multimedia and interactive platform. This is the era of ‘social media’, internet giants like Google, Yahoo!, YouTube, and the global success of social networking sites such as Facebook.

Conceptual issues worth thinking about

There are some terms we need to keep distinct even though they tend to be used both interchangeably and as disciplinary markers; the predominance of one or another of them indicates differences in philosophical, empirical, and even political disposition towards the role of the internet in society, whether as a research field, a resource, or a source of disquiet.

Cybernetics: This term was coined in the 1940s for theory and research into human–machine interactions based on how ‘feedback loops’ function in social and automated contexts. A discipline, if not a general paradigm, emerged around the Macy Conferences for Cybernetics (1943–54), which brought key figures from computer science, biology, mathematics, and anthropology together. This line of thinking is integral to the computational logic at the heart of information technology. Hayles (1999: 8) notes, as do many others, the term’s etymological origin in the Greek for ‘steersman’; now extended to R&D into ways of furthering ‘the synthesis between the organic and the mechanical’ (ibid.). Three principles are at the heart of the cybernetic paradigm: information, control, and communication (Haraway 1990, Ramage 2009, Spiller 2002).

The next two terms tend to be used synonymously in everyday language. However, they are not synonyms; the internet is the overarching architecture within which the world-wide web (or web for short) functions. Because the latter is the part most people, researchers and students in particular, use and access on a regular basis, it is easy to forget that it is a particular system of internet servers based on hyperlinking software; web browsers, search engines, graphics, audio, and video, singly or together, have developed in the wake of the web’s hyperlinking facilities.

The internet: Because the internet is the largest network of connected computers across the globe, it has become a generic term for the means and medium of all manner of computer-mediated communications: email to computer-dating to gaming; electronic commerce to e-government to political fund-raising. These various functions, based in the PC, the laptop, and increasingly the mobile phone, connect through servers around the world and are enabled by layers of computer code and the ‘user-friendly’ icons on our screens. In simple terms, the term denotes ‘a network of networks’. The internet works by way of a particular software constellation based on two protocols, TCP and IP (Transmission Control Protocol/Internet Protocol). These effectively connect a host (for example, your PC or mobile phone) with server/s. It is the backbone of computer-mediated communications as we understand them today.
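To make this concrete, here is a minimal sketch, assuming nothing beyond Python’s standard library, of what a host connecting to a server over TCP/IP looks like in code; the server name and request line are illustrative stand-ins rather than anything prescribed in the text above.

```python
# A host (this machine) opening a TCP connection to a server - a sketch.
# "www.example.org" is an illustrative host name, not one from the text.
import socket

host = "www.example.org"
with socket.create_connection((host, 80)) as sock:  # TCP handshake to port 80
    # Speak HTTP over the TCP connection: ask the server for its front page.
    request = f"GET / HTTP/1.1\r\nHost: {host}\r\nConnection: close\r\n\r\n"
    sock.sendall(request.encode())
    print(sock.recv(1024).decode(errors="replace"))  # first part of the reply
```

Everything discussed in the rest of this chapter – web-pages, addresses, search results – travels over connections of exactly this kind.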

The world-wide web dates from the 1990s and was key to the internet’s rapid and popular uptake and the corollary ‘dotcom’ boom, which lasted until the new millennium. Its hyperlinking software protocols characterize today’s internet; the hyperlink is what allows us to jump from one website or document to another. In distinction to the internet’s origins in the US military establishment, the web was developed in Switzerland by a British–European consortium led by Tim Berners-Lee in the late 1980s. Not all servers that make up the internet are part of the web.

Web 2.0/social media: To all intents and purposes, commercial social networking sites (for example, Facebook, MySpace, YouTube) that bundle text, images, and moving images into a single, individually based ‘social network’ have become synonymous with the web. At the time of going to press, the global brand-leader, Facebook, had reached the 500 million mark, its number of registered users outstripping the population of many countries. Email and static websites linked by browser software (for example, Explorer, Firefox), the bread-and-butter of internet communications, may be on their way out. Time will tell.

Cyberspace: A term with many inflections and a rich literary genealogy in science fiction. For our purposes here the term encompasses the experiential, phenomenological dimensions of life online, as distinct from how the web functions in technical terms or how the internet’s architecture is configured. Tim Jordan’s definition should suffice for now: ‘Cyberspace can be called the virtual lands, with virtual lives and virtual societies . . . [that] . . . do not exist with the same physical reality that “real” societies do . . . The physical exists in cyberspace but it is reinvented’ (Jordan 1999: 1).

Virtuality: This is also an elastic term, one that looks to capture the way ICTs have become embedded in ways of thinking about and living with/in our organic bodies. The ‘strategic definition’ put forward by Katherine Hayles pinpoints this tension and everyday fact of life: ‘Virtuality is the cultural perception that material objects are interpenetrated by information patterns. . . . [This] definition plays off the duality at the heart of the condition of virtuality – materiality on the one hand, information on the other’ (Hayles 1999: 12, 13–14).

Technical terms worth knowing about

Websites, web portals, web-pages

A website is a formal presence on the web. For that you need a web address, which in turn is comprised of several elements; see pp. 134–6 below. How a website is set up and designed differs from individual to organization, corporation, and governmental body. But all have a home page: the first thing that opens when you enter the site. Sometimes this home page comes after, or doubles up as, a web portal. As the name suggests, a web portal is a gateway website; it leads you further into a website’s range of options. Larger organizations in particular, though not exclusively, use portals; the United Nations at http://www.un.org/ is a classic case. A web portal is comparable to a front door, ‘shop window’, or ‘welcome’ sign.

Websites are comprised of web-pages, variously made up of text, images, sound, and video material. The website’s organizational hierarchy, multimedia applications, and layout are down to graphic design decisions, expertise, access to a range of software applications, computing power, and bandwidth capacity. As the web becomes increasingly made up of sound, still images, and video, websites are less text-heavy yet require more transmission capacity (bandwidth). That said, (hyper)text still underpins web-content. Older websites, or those without access to enough bandwidth (or indeed electricity), the latest plug-ins, or web-design know-how, are immediately recognizable by their larger share of static, textual content. Questions of looks, taste, and cultural distinction also count in twenty-first-century cyberspace.

Websites, and their composite web-pages, are linked together, and in turn linked onwards to the web, by a computer protocol called HTTP – hypertext transfer protocol. The way they can be located is, as in ‘real life’, by having an address: one that is recognizable and consistently locatable. In web-speak this is the URL, the uniform resource locator. The address given for a website’s home page provides you with the URL in its simplest form.
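Put together, the protocol and the address are all a program needs in order to retrieve a page. A minimal sketch, assuming Python’s standard urllib and using the UN home page discussed in this chapter:

```python
# Requesting a web-page over HTTP by its URL - a sketch using Python's
# standard library. The URL is the UN home page discussed in this chapter.
from urllib.request import urlopen

with urlopen("http://www.un.org/") as response:
    page = response.read()             # the home page's raw HTML
    print(response.status, len(page))  # HTTP status code and size in bytes
```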

Understanding web addresses

When someone refers to the URL (uniform resource locator) or web address, they are talking about that line of words, numbers, and slashes that appears in the strip at the top of the screen when you use a web-browser; for example, Firefox, Explorer, or Safari. How this address contracts, expands, and operates as you move to and from it is where researching the internet, rather than surfing it with varying degrees of interest or absent-mindedness, really begins in earnest. For instance, http://www.un.org/ is the URL for the United Nations on the world-wide web. This address brings you to the UN’s website by way of its web-portal/home page. Once you have chosen your language, from there you enter a matrix of interconnected web-pages.

Let’s take a closer look by taking the screenshot (Figure 5.2) from Goldsmiths’ website (2010) as an example. As you go deeper into any given website (see the screenshot), this address (URL) gets longer, depending on how its composite pages and links have been organized and coded. Learning how to interpret this first strip of information as you are browsing (a way of moving through the web in a less structured, more open-ended way) is one thing. Being able to ascertain its usefulness at a glance when searching (using a search engine or tool in a focused search) can save you time.

At this point many of you may well be aware that searching the web for research purposes tends to follow web-surfing practice; namely, the use of the ‘back’ button/arrow icon on the top left-hand side of the browser as we ‘browse’ the web. This forwards and backwards movement gets most of us where we want to go; at times, though, it loses us our place, because not every part of the web address is linked in the same way. So, habits and the ease with which most of us search/browse in this way aside, there are ways to be more focused, especially as we face search results as a list of top-ten hits. Let’s look again at the web address shown in the screenshot close up (Figure 5.3).

All web addresses have three main elements:

  • (a) The protocol. The most common one is hypertext transfer protocol (http:// or https://). Another, familiar to those who use news-feeds, is feed://. Anything before the :// designates the protocol. These days most web-browsers (for example, Explorer or Firefox) automatically include this first part whether or not you type it in.
  • (b) The domain name. This is the core element because it tells you the key information. It has three parts in turn:
    • (i) the name of the server – or host (usually www);

Figure 5.2: Screenshot (i). Figure 5.3: Screenshot (ii).

    • (ii) the name of the site, i.e. the service you have accessed, such as the name of the institution or organization; ‘gold’ in this case stands for Goldsmiths;
    • (iii) the top-level domain. This can be a generic one, which indicates whether it is an educational (.edu or .ac), governmental (.gov), (international) organizational (.org), or commercial (.com) service. Top-level domains also include country codes; for example, web addresses based in the United Kingdom end with .uk, those in New Zealand with .nz, those in India with .in, and so on; ‘gold.ac.uk’ in this case indicates that Goldsmiths (gold) is an educational institution in the UK. US educational institutions often simply end with .edu; for example, http://www.mit.edu/, which is the Massachusetts Institute of Technology’s web address. Country codes are usually key indicators of the content, national, and legal affiliation of a product or service, though not necessarily of the actual whereabouts of the website owners.
  • (c) The file path. This last part comes after the first forward slash (/), including any further forward slashes. This is the part of the web address that can give you vital clues about how the website is organized and where certain segments of information are housed: in the screenshot, the first segment is about and the second is thelearningexperienceatgoldsmiths (see the sketch after this list).
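These three elements can also be pulled apart programmatically. A minimal sketch using Python’s standard urllib.parse, applied to the Goldsmiths address as reconstructed from the screenshot discussion above (the live address may well have changed since):

```python
# Splitting a web address into (a) protocol, (b) domain name, (c) file path.
# The URL is reconstructed from the screenshot discussion and is illustrative.
from urllib.parse import urlparse

url = "http://www.gold.ac.uk/about/thelearningexperienceatgoldsmiths/"
parts = urlparse(url)

print(parts.scheme)             # 'http'            -> (a) the protocol
print(parts.netloc)             # 'www.gold.ac.uk'  -> (b) the domain name
print(parts.netloc.split("."))  # ['www', 'gold', 'ac', 'uk']: host, site name,
                                #   and the top-level domain with country code
print(parts.path)               # '/about/thelearningexperienceatgoldsmiths/'
                                #                   -> (c) the file path
```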

When citing web addresses, certain rules now apply: as a rule, the whole URL is required along with the date you last accessed the site (though some citation style guides differ on this point).

Along with these two criteria, an author name and document title are also mandatory; if the author is an organization then that will suffice, and if the only document title you have is the one designating the web-page, that too will suffice. Whether you incorporate web resources into your literature list or list them separately depends on your institutional setting as well as on whichever style or citation guide you are using. In any case, simply listing URLs in your literature list is not adequate.
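As an illustration of those minimum elements assembled in one place, a small sketch; the function name, field order, and the sample entry are made up for the example rather than taken from any particular style guide:

```python
# Assembling the mandatory elements of a web citation - an illustrative
# sketch only; follow your own style guide's ordering and punctuation.
def cite_web(author: str, title: str, url: str, accessed: str) -> str:
    return f"{author}. '{title}'. {url} (accessed {accessed})."

# A made-up example entry:
print(cite_web("United Nations", "Welcome to the United Nations",
               "http://www.un.org/", "1 June 2012"))
```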

Every website owner or administrator has to register their web address; whether they opt for a generic top-level domain name or a country-code depends on availability, commercial, cultural, and political considerations; a whole story and area for advocacy and research in itself.
The governing body for this process at the global level is ICANN (the Internet Corporation for Assigned Names and Numbers), a corporate entity based in California, USA.

I don’t usually pay much attention to the web address because I can usually see straightaway if the web-page/website is relevant.

  • True. However, you can also tell a lot about whether it is worth going further if you look more closely at the web address provided at the end of the first set of search results – or hits.
  • If you are after an international organization rather than an educational institution then .edu is probably not where you want to click first.
  • If you notice that the file path is very long then this indicates a web-page embedded in a website. A closer look can tell you whether it is a lead worth following.
  • If you find a worthwhile website whilst linking from another one, make a note of the source URL. Hyperlinks between websites are not always two-way streets and have limited shelf-lives.
  • These architectural features, and their accessibility/aesthetic functions, may themselves be an aspect of specific sorts of website analysis, or mapping (see below).
