
Surveillance and religion workshop

This week I had the opportunity to join the first workshop of the Surveillance & Religion Network, organised by Eric Stoddart of the University of St Andrews and Susanne Wigorts Yngvesson of the Stockholm School of Theology. The ‘Surveillance, Religion, and Security – Workshop One’ took place in Birmingham from 17 to 19 October and was the first of several events for which the organisers secured funding from the AHRC.

My PhD research broadly deals with two areas: corpus linguistic methods (how can we identify patterns of meaning in a discourse?) and their application to surveillance discourse (how is the concept of surveillance discussed in different domains of public discourse?). In the first two years of my PhD I spent most of my time focusing on the methodological concerns: How do I collect relevant texts? How do I need to process these texts? What corpus linguistic methods are out there? How have other researchers applied and developed them? Which methods are most suitable for my project? While dealing with these questions I mainly talked to other linguists.

However, I had not engaged much with the other relevant group: surveillance studies scholars. So, when I saw the CfP for the first workshop of the Surveillance & Religion Network, I considered this a good chance to initiate some dialogue with them. They, I thought, would be more interested in the theme than in the method and would therefore be able to give me more feedback in that regard.

This photo of the refreshments provided at the workshop, in my view, is a good representation of the atmosphere throughout the event: friendly and familiar (the cookies were actually really good!)

Once the event had started, I was happy to discover that my nervousness about attending as a linguist had been unnecessary. The atmosphere at the workshop was very friendly indeed. Attendees came from very mixed backgrounds: academics (sociology, theology, education, archaeology, linguistics), practising clerics and even the police.

We thus had an insightful programme full of different perspectives on surveillance. My personal highlight was the public lecture by Professor David Lyon, director of the Surveillance Studies Centre at Queen’s University, Canada. The lecture was entitled ‘Why surveillance is a religious issue’.

Professor Lyon emphasised one point that was also voiced throughout other sessions in the programme: the increasing ‘surveillance culture’ fosters a climate of suspicion, which can only be overcome by building trust. In Lyon’s view, while surveillance practices can reinforce the marginality of minorities, religious institutions are in a position that allows them to promote trust and hope. Lyon was particularly keen to stress that we should not give up on our agency, which, as he argues, is in line with the teachings of the Abrahamic religions. Indeed, there are small steps we can all take towards promoting trust, for example by campaigning for less surveillance at our workplaces or encouraging our software-developing friends to collect less consumer data. The public lecture was recorded and the audio will be made available soon. (I will post the link here once it is live.)

I have only given the example from David Lyon here, but throughout the workshop we also heard about many other ways in which religion and surveillance can be related. One example is the metaphor of the ‘divine gaze’: how God, in the Abrahamic religions, watches over the people. My own contribution was, obviously, linguistic in nature. I presented work related to my PhD thesis: a corpus linguistic analysis of religious themes in surveillance discourse in the academic journal Surveillance & Society and in a collection of blogs. I enjoyed meeting this group of scholars and practitioners who share an interest in surveillance and its social consequences. They also reassured me that my research is of interest to them, as there is not much dialogue between surveillance studies and linguistics.

If you are curious about the relationship between surveillance and religion in particular, you might be interested in the next event by the Surveillance & Religion Network: ‘Religions Consuming Surveillance – Workshop Two’ is taking place from 20 to 22 March in Edinburgh, and the deadline for the CfP is 15 December. If you have any experiences related to the theme of surveillance and religion, or to interdisciplinary encounters more generally, I’d be curious to hear about them in the comments!

Update 26 October: I just found another blog post about Professor Lyon’s public lecture by the organiser of the Open Rights Group Birmingham, Francis Clarke. His attendance (and participation in the question session) is a good example of how academics and public groups, particularly activists, can engage with one another.


Trying to take up a coding mindset (as a linguist)

I originally tweeted this photo on the morning when my second R book by Stefan Gries arrived at the same time as the Bloomberg code issue… was that a sign? I haven’t gotten around to starting the second book yet, though, as I am still working through the first one!
I am currently attempting to learn something about the programming language R. Why? Is that even a good idea?

At a few points during the past few years I have wondered whether I chose the wrong degree(s). My BA degree was called “English Studies for the Professions (BAESP)” and I really enjoyed it and found everything interesting. At the same time I wanted to get more involved with research and see how linguistics can become really useful. So I moved on to an MA in Applied Linguistics and finally a PhD in the same field. I am really interested in linguistics and think it is a worthwhile area. BUT at times I wonder: “Why didn’t I study computational linguistics?” Since my research deals with corpus linguistics, this is actually not much of a stretch. The problem is that I don’t seem to have a computational mindset… So far the only kind of computational tool that I can more or less deal with is interactive. During the MA we did some work with the statistical package SPSS, which used to be command-driven but now has an interactive interface. For corpus linguistic analyses I have used WordSmith Tools, AntConc and SketchEngine, which are all more or less user-friendly. If anything, I get confused by too many buttons and settings on offer.

When and how did I decide to do something about my non-computational situation?
I have been toying with the idea of getting a little more tech-savvy (and, at the same time, brushing up on my understanding of statistics) for a year or so. Throughout my studies I have simply come across so many studies in which people do more interesting things than I seem to be able to, because I don’t know how to make something like that happen. An example is a Twitter study that I already quoted in my BA project (which was also about Twitter). For my own project I used an online tool (at the time it was called TAGS v3; now there is TAGS v6) to collect a limited number of Tweets, leading to a small corpus. Michele Zappavigna (@SMLinguist), in her book Discourse of Twitter and Social Media, however, had access to the infrastructure and support necessary for downloading and compiling a large Twitter corpus containing over 100 million Tweets, using a Python script and the Twitter API. At the time I thought I would never be able to do this myself or have the required technical support.

While I still don’t know how to do this, my attitude has changed slightly. I’m lucky to be cooperating with people from statistics and programming on a project coordinated by my supervisor. This regular interdisciplinary contact has taught me that there are things that seem infinitely difficult to me but can easily be done by others in a short amount of time with a few lines of code. Moreover, the cooperation is gradually showing me what kind of things are actually possible with programming. In the meantime I have been wondering whether it is worth investing time and energy (and money, I guess) in learning some baby steps in programming when there are so many experts out there. Well, I don’t know, but I am trying to regain some control over my work…
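Just to make that point about ‘a few lines of code’ concrete: here is a purely illustrative sketch of my own (not the script Zappavigna used, and the credentials and search term are only placeholders) of how a small number of Tweets could be collected from within R via the Twitter API, using the twitteR package:

library(twitteR)   # interface to the Twitter API

# Authenticate with placeholder credentials (register an app with Twitter to get real ones)
setup_twitter_oauth(consumer_key    = "YOUR_CONSUMER_KEY",
                    consumer_secret = "YOUR_CONSUMER_SECRET",
                    access_token    = "YOUR_ACCESS_TOKEN",
                    access_secret   = "YOUR_ACCESS_SECRET")

# Collect up to 500 English-language Tweets containing the word "surveillance"
tweets <- searchTwitter("surveillance", n = 500, lang = "en")

# Turn the list of status objects into a data frame and keep the text for later corpus work
tweets_df <- twListToDF(tweets)
write.csv(tweets_df["text"], "tweets.csv", row.names = FALSE)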

Here are some interesting viewpoints on coders and coding expressed by Paul Ford in that recent Bloomberg code issue:

Coders are people who are willing to work backward to that key press. It takes a certain temperament to page through standards documents, manuals, and documentation and read things like “data fields are transmitted least significant bit first” in the interest of understanding why, when you expected “ü,” you keep getting “?”

[Paul Ford, What Is Code?, Bloomberg Special Double Issue June 15-28, 2015, print p. 24 (digital – free & with really cool animated visualisations! – Section 2.1)]

Regarding the question whether or not to learn coding, Ford says:

There’s likely to be work. But it’s a global industry, and there are thousands of people in India with great degrees. […] I’m happy to have lived through the greatest capital expansion in history, an era in which the entirety of our species began to speak, awkwardly, in digital abstractions, as venture capitalists waddle around like mama birds, dropping blog posts and seed rounds into the mouths of waiting baby bird developers, all of them certain they will grow up to be billionaires. It’s a comedy of ego, made possible by logic gates. I am not smart enough to be rich, but I’m always entertained. I hope you will be, too. Hello, world!

[print pp. 109-112, digital Section 7.5]

Personally, I don’t think I can now start to become ‘a real coder’ and ‘compete’ with all those computer science graduates and other professional coders. BUT the whole thing seems fascinating, and if I learn even a little, some light might be shed on the many areas that are still dark to me.

Why R?
I saw info about the ‘Regression modelling for corpus linguistics’ workshop by the linguist Stefan Gries (held in Lancaster, 20 July) and knew about his books (Quantitative Corpus Linguistics with R – QCLWR – and Statistics for Linguistics with R), so I finally decided to buy them. That’s really the main point for me. [By the way, in the book, Gries argues that R is particularly well suited for corpus linguistics…] While I know other resources are available, such as MOOCs (I even attempted a MOOC on R but dropped out), I need to see something that’s relevant to my own research (the R MOOC I attempted used data from biology, I believe). Having said that, the MOOC introduced a neat little learning environment called ‘Swirl’, which allows you to “learn R, in R”. I might go back to that at some point. Actually, it’s even hard for me to get through the first 100 pages of Gries’ QCLWR, because they cover the basics with few linguistic applications. But I try to motivate myself to continue by flipping beyond the 100 pages now and then, because I can see that soon I will (hopefully) be able to apply those basics to linguistic problems (I’m almost at page 96 now – yay!). If someone had written a book about Python for corpus linguistics (is there one?), I might have gone for that instead, because I didn’t really know anything about which language is best to learn. However, I am looking forward to a session at the Nottingham Summer School in Corpus Linguistics entitled ‘Essential python for corpus linguists’ run by Johan de Joode.
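For the record, getting started with Swirl only takes a couple of lines (this is just the standard installation from CRAN, nothing particular to my setup):

install.packages("swirl")   # install the swirl package from CRAN
library(swirl)              # load it
swirl()                     # start the interactive lessons and follow the prompts in the console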

My main problems so far
Unfortunately, I am still lacking the coding mindset, but I hope that will change after working through the second part of QCLWR, which focuses more on linguistic applications. I haven’t done proper maths since high school, and this step-wise logical thinking about embedding logical/regular expressions and loops and variables and whatnot all feels a bit foreign to me. More often than not I can’t follow the examples at first sight (usually because I have missed a parenthesis somewhere…). Just have a look at an example of the kind of line I have been trying to work through (Gries, 2009: 89):

gsub("(\\w+?)(\\W+\\w*?)\\1(\\W)", "\\1\\2\\1\\3", text, perl=T)
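To convince myself that I at least understand the backreferences (the \\1 bits), I find it helps to play with a much simpler case first. This is my own toy example, not one from the book:

# Toy example: collapse immediately repeated words using a backreference
text <- "this is is a test test sentence"
gsub("\\b(\\w+) \\1\\b", "\\1", text, perl = TRUE)
# [1] "this is a test sentence"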

Trying to keep track of everything that could be potentially useful in my copy of QCLWR with sticky tags.

I also have difficulties with remembering function names and their argument structures and, worse still, I can’t really follow the R/RStudio help entries about the functions (I have put the little look-up commands I keep returning to below). The biggest problem is that it takes me ages to get through the tutorial in Gries’ QCLWR. There are still more than a hundred pages left, including masses of exercises and assignments, and the second book (Statistics for Linguistics with R) is still waiting for me… Obviously this is not even the only task I’m supposed to be doing for my PhD at the moment…
On the bright side, though, I am slowly starting to feel more comfortable staring at condensed strings of digits and characters, and I am picking up the ability to analyse a command string step by step. Once something does work, it really delights me.
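Here are those look-up commands (nothing fancy, just base R, with gsub() as the example):

?gsub            # open the help page for gsub()
args(gsub)       # print just the argument structure
example(gsub)    # run the examples from the bottom of the help page
apropos("sub")   # list functions in loaded packages with "sub" in their name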

What are your experiences with starting to code? Do you think it’s worthwhile to invest in these skills? Which programming language are you learning and why? [And sorry for turning this into such a long post…]