This is a note to self more than anything else, but maybe someone learning R out there finds it useful, too.
I lost some time recently because I kept running R analyses and only saved the results as plots and CSVs. As I’m on a budget MacBook with limited memory, I can’t keep many results loaded in R (it all stays in memory). Now if I want to go back and change a plot, for example to make it prettier in terms of its dimensions, to add a title, or even to filter the data that goes into a subplot… I have to rerun the analysis.
Saving the results in a CSV file is good for future reference, but it doesn’t solve the problem that the results can’t easily (?) be recreated from it. It seems far easier to save the actual R data in an R format.
In fact, my ‘statistics and programming colleague’ has been providing such R files in the ‘RDS format’ for our project, saving me the time of running them while giving me the chance to select my own subsets for plots. I’m a bit gutted that I didn’t realise the potential of this function for my own work until today. (I am having to rerun the results in order to create nicer plots; but then it’s also better to archive the results in an R format rather than only in CSV, I suppose, because things do change and I might find mistakes in my methodology later…).
In order to create an RDS file you use the ‘saveRDS()’ function (see the R documentation; for me it’s usually sufficient to simply pass the object and the file path):
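A minimal sketch of what that looks like in practice (the object and file names here are just placeholders of mine, not from the original project):

```r
# Suppose 'results' holds something that was expensive to compute,
# e.g. a data frame of word frequencies.
results <- data.frame(word = c("the", "of", "and"),
                      freq = c(120, 80, 75))

# Write the object to disk in R's serialised format...
saveRDS(results, file = "results.rds")

# ...and read it back later, under whatever name you like.
results_reloaded <- readRDS("results.rds")
identical(results, results_reloaded)  # TRUE
```

From there the reloaded object can go straight back into plotting code, with no rerunning of the analysis.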
For the technical details you can refer to the R documentation linked above or this post that explains the difference between ‘saveRDS()’ and ‘save()’ in more detail. In a nutshell, ‘save()’ apparently saves the object with its name. So, if my original results were called ‘results’ and meanwhile I had created another object called ‘results’ I’d have a problem when I loaded the saved version. With ‘saveRDS()’ we don’t have this problem.
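A small toy illustration of that difference (the object names are made up by me):

```r
results <- c(1, 2, 3)
saveRDS(results, "results.rds")        # saves only the value
save(results, file = "results.RData")  # saves the value *and* the name

# Later on, a new object happens to reuse the name...
results <- c(99, 98, 97)

# load() restores by name, silently clobbering the new 'results':
load("results.RData")
results        # 1 2 3 -- the newer object is gone

# readRDS() just returns the value, so you choose the name yourself:
results <- c(99, 98, 97)
old_results <- readRDS("results.rds")
results        # 99 98 97 -- untouched
old_results    # 1 2 3
```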
Hopefully, this post can be of use to some of you (obviously check what’s most helpful for your work). I’ll start saving all my important R results in this format 🙂
In this post I make use of the highly praised (at least in a writing seminar I attended last year) method of ‘free writing’. I’m afraid that if I start with a proper draft that needs to be (re- and re-) edited, the momentum for this post will get lost in procrastination.
As you may have noticed, there has been some real downtime on this blog over summer. In fact, I don’t even know how it’s possible that this summer has already passed.
HOW has it passed? Personally, there were events that I attended: a 4-day summer school and a 1-week conference, both in July. After my June post I thought I’d write a really cool post about the exciting month of July talking about these two. Then August came, I moved to another city, and then flew to my home country to visit family. Now it’s September (actually approaching mid-September…). I had made writing plans for the time after the conference. I don’t want to say much about them, only that things didn’t go quite like I planned. But maybe that’s okay?!
One of the main reasons for me to go home for quite a while was my relative’s wedding and the accompanying hen do. They were three weeks apart and I wanted to attend both. I really enjoyed being back in the social circle of my childhood and teenage years, as I have now lived abroad for 5 years. Sometimes I feel a bit lost as to where I belong and what I’m going to do/ where I’m going to be. Visiting my family is then like a grounding experience because many of my relatives enjoy stable jobs/careers/families and places to live (i.e. they haven’t really moved ever or perhaps only once or twice).
It feels good that I gained some distance from my everyday PhD experience by being reintegrated in this family setting. [And friends! I met many of them and spent an entire 4 days at my friend’s place in Hamburg! I also went to the gym! :)] Inevitably, perhaps, some of the ambitious plans for the break got stuck at the procrastination stage. At the same time, procrastinating on my planned tasks led to progress in other areas: I completed my switch from Endnote to Zotero due to the need to collaborate with colleagues. Browsing the web for suggestions, I learned about the possibility of storing my journal articles in a local Dropbox folder and linking them to the Zotero entries. How cool is that? When I double-click on a Zotero item, the PDF pops up, but rather than being stored in some weird, automatically generated, individual Zotero folder, they are all in my ‘PhD reading’ folder.
I also finished a MOOC on R (thanks to my friend @kimsuekreischer for the tip): Microsoft: DAT204x Introduction to R (I think it will re-run in a few weeks’ time; not sure whether the URL stays the same or not). It will be too basic for the pros among you, but I was quite happy with its interactive assignments, which I was actually able to follow. I have also had a go at another R book, this time not by Stefan Gries (see my previous post) but by Matthew Jockers. The writing style is really refreshing and there are many very interesting ideas for analysing texts, with a focus on their meaning. I still have to get used to the perspective/terminology, though, which seems more text-mining or NLP-minded than corpus linguistic. While following the MOOC and reading Jockers’ book I have often felt that R is something I can actually learn (apart from all the moments when I mistyped the code and couldn’t find the typo >< !). But what is still a huge challenge to me is how to move from following the instructions to actually designing my own code! This is where some more procrastination developed, because I had been hoping to use R for cleaning my data… but… that’s still a little tricky.
In other news, I have sort of followed the #survivephd15 MOOC by @thesiswhisperer and team. Are some of you also participating? I haven’t gotten as involved as many of the other participants, but I am quite happy about this opportunity of dialogue among PhD students. I’m looking forward to topics in the later weeks, as it’s still at the introductory stage (history of the doctorate this past week).
Now there are just 3-4 more days until I fly back to my PhD life. I’ll miss my home, family and friends when I’m back abroad. But it’s good that I’m almost feeling like I have an itch to start again and continue my PhD relationship ;). Incidentally, today is the anniversary of my MA dissertation submission. Time is a strange phenomenon.
I am currently attempting to learn something about the programming language R. Why? Is that even a good idea?
At a few points during the past few years I have wondered whether I chose the wrong degree(s). My BA degree was called “English Studies for the Professions (BAESP)” and I really enjoyed it and found everything interesting. At the same time I wanted to get more involved with research and see how linguistics can get really useful. So I moved on to an MA in Applied Linguistics and finally a PhD in the same field. I am really interested in linguistics and think it is a worthwhile area. BUT at times I wonder: “Why didn’t I study computational linguistics?” Since my research deals with corpus linguistics, this is actually not much of a stretch. The problem is that I don’t seem to have a computational mindset… So far the only type of computational stuff that I can more or less deal with is interactive. During the MA we did some work with the statistical package SPSS, which used to be command-driven but now has an interactive interface. For corpus linguistic analyses I have used WordSmith Tools, AntConc and SketchEngine, which are all more or less user-friendly. If anything, I get confused by too many buttons and settings on offer.
When and how did I decide to do something about my non-computational situation?
I have been toying with the idea of getting a little bit more tech-savvy (and at the same time brushing up on my understanding of statistics) for a year or so. Throughout my studies I have simply come across so many studies where people do more interesting stuff than I seem to be able to, because I don’t know how to make something like that happen. An example is a Twitter study that I already quoted in my BA project (which was also about Twitter). For my own project I used an online tool (at the time it was called TAGS v3; now there is TAGS v6) to collect a limited number of Tweets, leading to a small corpus. Michele Zappavigna (@SMLinguist), in her book Discourse of Twitter and Social Media, however, had access to the infrastructure and support necessary for downloading and compiling a large Twitter corpus containing over 100 million Tweets. She used a Python script and the Twitter API. At the time I thought I was never going to be able to either do this myself or have the required technical support. While I still don’t know how to do this, my attitude has changed slightly. I’m lucky to be cooperating with people from statistics and programming for a project coordinated by my supervisor. This regular interdisciplinary contact has taught me that there are things that seem infinitely difficult to me but can easily be done by others in a short amount of time with a few lines of code. Moreover, the cooperation is gradually showing me what kind of things are actually possible with programming. In the meantime I have been wondering whether or not it is worth investing time and energy (and money, I guess) in learning some baby steps in programming when there are so many experts out there. Well, I don’t know, but I am trying to regain some control over my work…
Here are some interesting viewpoints on coders and coding expressed by Paul Ford in that recent Bloomberg code issue:
Coders are people who are willing to work backward to that key press. It takes a certain temperament to page through standards documents, manuals, and documentation and read things like “data fields are transmitted least significant bit first” in the interest of understanding why, when you expected “ü,” you keep getting “?”
Regarding the question whether or not to learn coding, Ford says:
There’s likely to be work. But it’s a global industry, and there are thousands of people in India with great degrees. […] I’m happy to have lived through the greatest capital expansion in history, an era in which the entirety of our species began to speak, awkwardly, in digital abstractions, as venture capitalists waddle around like mama birds, dropping blog posts and seed rounds into the mouths of waiting baby bird developers, all of them certain they will grow up to be billionaires. It’s a comedy of ego, made possible by logic gates. I am not smart enough to be rich, but I’m always entertained. I hope you will be, too. Hello, world!
[print pp. 109-112, digital Section 7.5]
Personally, I don’t think I can now start to become ‘a real coder’ and ‘compete’ with all those computer science graduates and other professional coders. BUT, the whole thing seems fascinating and if I know a little bit some light might be shed on so many areas that are still dark for me.
I saw info about the ‘Regression modelling for corpus linguistics’ workshop by the linguist Stefan Gries (held in Lancaster, 20 July) and knew about his books (Quantitative Corpus Linguistics with R – QCLWR – and Statistics for Linguistics with R), so I finally decided to buy them. That’s really the main point for me. [By the way, in the book, Gries argues that R is particularly well-suited for corpus linguistics…] While I know other resources are available, such as MOOCs (I even attempted a MOOC on R but dropped out), I need to see something that’s relevant to my own research (the R MOOC I attempted used data from biology, I believe). Having said that, the MOOC introduced a neat little learning environment called ‘Swirl’, which allows you to “learn R, in R”. I might go back to that at some point. Actually, it’s even hard for me to get through the first 100 pages of Gries’ QCLWR because it’s about the basics, with few linguistic applications. But I try to motivate myself to continue by flipping beyond the 100 pages now and then, because I can see that soon I’ll (hopefully) be able to apply those basics to linguistic problems (I’m almost at page 96 now – yay!). So if someone had written a book about Python for corpus linguistics (is there one?), I might have gone for that, because I didn’t really know anything about which language is best to know. However, I am looking forward to a session at the Nottingham Summer School in Corpus Linguistics entitled ‘Essential python for corpus linguists’ run by Johan de Joode.
My main problems so far
Unfortunately, I am still lacking the coding mindset, but I hope that will change after working through the second, more applied-linguistic part of QCLWR. I haven’t done proper math since high school, and this step-wise logical thinking about embedding logical/regular expressions and loops and variables and whatnot all feels a bit foreign to me. More often than not I can’t follow the examples at first sight (usually because I have missed a parenthesis somewhere…). Just have a look at an example of the lines that I have been trying to work through… (Gries, 2009: 89):
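I won’t reproduce Gries’ actual line here, but just to give a flavour, the kind of nesting I mean looks something like this (my own made-up example, not the one from the book):

```r
# Tokenise a sentence, lowercase the tokens, count the word
# frequencies, and sort them -- all crammed into one nested call,
# which is exactly the style that takes me ages to unpick.
sentence <- "The cat sat on the mat and the dog sat too"
sort(table(tolower(unlist(strsplit(sentence, "[^A-Za-z]+")))),
     decreasing = TRUE)
# 'the' comes out on top with 3 hits, 'sat' with 2
```

Reading it inside-out (strsplit first, then unlist, tolower, table, sort) is the only way I can make sense of lines like this.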
I also have difficulties with remembering function names and their argument structures and, worse still, I can’t really follow the R/RStudio help entries about the functions. The biggest problem is that it takes me ages to get through the tutorial in Gries’ QCLWR. There are still more than a hundred pages left, including masses of exercises and assignments, and the second book (Statistics for Linguistics with R) is still waiting for me… Obviously this is not even the only task I’m supposed to be doing for my PhD at the moment…
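One small thing that has helped me a bit with the argument-structure problem (a generic R tip, nothing specific to Gries): asking R itself about a function.

```r
# args() prints just the argument list, without the full help page:
args(saveRDS)

# formals() returns the arguments (with their defaults) as a list,
# so you can look at the names directly:
names(formals(saveRDS))  # includes "object" and "file"
```

And `?saveRDS` still opens the full help entry when the short version isn’t enough.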
On the bright side, though, I am slowly starting to feel more comfortable staring at condensed strings of digits and characters and slowly picking up the ability to analyse a command string step by step. Once something does work it really delights me.
What are your experiences with starting to code? Do you think it’s worthwhile to invest in these skills? Which programming language are you learning and why? [And sorry for turning this into such a long post…]