Recently I wrote for GESIS on the history of science communication and popularized science. It’s an interesting question when the practice started. As with so many phenomena, this is mostly down to the historian’s favorite thing: a definition.
For Germany, and for high-visibility popularized science, Justus von Liebig’s Familiar Letters on Chemistry are a good starting point. Read what I had to say about these, and about science communication in the twentieth and twenty-first centuries, over at the GESIS Blog.
Recently, I put together a few Twitter threads on my experience with teaching online. (You can find general thoughts here, and a longer thread on the technical aspects here.) Since not everyone’s favorite pastime is scrolling through 280-character chunks, and since a few people asked me to, here’s a version of my thoughts in a more digestible essay format. What was true for those threads is true here: your mileage may vary, sometimes considerably. Still, I hope you can get some useful information out of this.
(Note: all of the linked items are for demonstration purposes only and I do not profit in any way from you using or buying any of them. For reasons to do with German internet law, however, this is still to be considered ad content).
With this article, I’m mostly addressing people—professors, teachers, and others—who have to create content that teaches someone something online. The immediate context is the current Coronavirus pandemic in which a lot of teaching needs to move online, but a lot of this would be applicable to any situation in which the same kind of content needs to be created, even in a future, hopefully not virus-controlled, world.
This text assumes that you are a person who has at least some skills in managing files, uploading content, and using a learning management system of some variety. While I also have a fair share of experience with the “Zoom seminar,” i.e., a class taught in a virtual space using video conferencing software, that is a much more varied topic and will depend much more on the functionality of the software you can or have to use. I may come back to it later.
This article focuses on creating video content that will be uploaded to a dedicated platform, such as the unfortunately named but quite powerful Panopto, to YouTube, or to a similar service. In short: it focuses on getting content out the door that caters to asynchronous learning environments, whether that’s a class taught entirely without virtual meetings, or only partially.
What’s Needed and What’s Possible
When I began planning my online classes, I defined for myself what I would be able to produce. This involved creating, as I usually do for any class, a draft syllabus that had topics, class rules, and readings on it. This then helped me space out the teaching content in the learning management system (I used Moodle because JGU, my university, set it up and supported it with introductory courses and informational material), and therefore also create a schedule by which point materials would have to be ready.
The class I created the most content for was an introductory survey course in American history, geared towards students who had not yet received any education on the topic. I chose to create video content for this class out of immediate need, but also with an eye to perhaps reusing it to teach or supplement future classes. As the fall semester now also looks to be mostly online, this proved to be the right choice.
The American history survey would usually have met on Tuesday afternoons. So I set myself the goal of finishing and uploading materials every Tuesday. That way, there would be some regularity and some rhythm to the class which would help students structure their schedule.
I decided that each week was going to get at least one lecture video of me talking to camera intercut with PowerPoint slides. These lecture videos would sum up the major topics and themes of that week’s lesson. They ran between 25 and 75 minutes, depending on how much material there was to get through, and how much additional content I felt I needed to add to the video.
Recommendations on the best practices and lengths for such videos vary. I aimed for something in the vicinity of 25 to 45 minutes. Sometimes I split up longer videos, sometimes I didn’t. At first I had a hard-and-fast rule never to exceed one hour per video, but I decided to scrap it for pragmatic reasons.
There’s obviously a trade-off here, not only in terms of how long you can keep students’ attention, but also because longer videos mean larger files, which become harder to handle, store, and upload. I would aim to keep videos to a maximum of about 45 or 50 minutes (roughly the length of a TV drama episode). That said, in the end it’s better to get things done at all than to adhere to the standards of a Platonic ideal of a teaching video.
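To make the file-size side of that trade-off concrete, a rough estimate is simply bitrate times duration. Here is a quick back-of-the-envelope sketch; the default bitrates are illustrative assumptions for a 1080p screen recording, not measurements from any particular setup:

```python
# Rough file-size estimate for a lecture recording: (video bitrate +
# audio bitrate) * duration. The default bitrates below are assumed
# example values, not measured ones.

def video_size_mb(minutes: float, video_kbps: float = 2500,
                  audio_kbps: float = 128) -> float:
    """Approximate file size in megabytes for a recording of the
    given length at the given video/audio bitrates (kilobits/s)."""
    total_bits = (video_kbps + audio_kbps) * 1000 * minutes * 60
    return total_bits / 8 / 1_000_000

# A 45-minute lecture at these rates comes out to roughly 900 MB:
print(round(video_size_mb(45)))  # prints 887
```

Doubling the length doubles the file, so a 90-minute recording at the same settings already approaches 2 GB, which is where uploads to a learning management system start to get unwieldy.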
In addition to the lectures, I would create shorter videos: a general welcome to the strange new world of the pandemic semester, a specific video addressing the why and wherefore of the class, plus some other videos as I saw fit, addressing things I would have brought up in class normally. These, for example, dealt with current issues and how they related back to the class’s topic, details regarding exams, and a video containing tips on how to deal with the amount of reading students were expected to do. Finally, I made one “farewell” video after the class had ended.
I managed to keep to the schedule reasonably well for the first few weeks, but as the semester wore on, I fell behind. I had had a few weeks before the first class started to set things up, which provided a buffer. Yet by the second month, with the increased workload that teaching online required, that buffer was wearing thin. Technology failed me here and there, research took longer than planned because of the reduced availability of materials due to Covid-19, and so on. I started regularly posting videos one or two days late. I felt like I was failing, but in the end I still mostly managed to upload each week’s lectures within the week they were for, and I notified students when a video was going to be late.
The lesson I take away from this is: plan ahead, and hold yourself to a schedule, but adjust when this becomes necessary. Let students know that this is a new and unfamiliar situation for you as well as for them, and tell them that you’ll try and keep to the posted schedule, but that you are working with the resources you have, and that sometimes delays may occur.
I used two different production processes for the two kinds of videos. This isn’t necessary, but it helped me both keep students’ interest by mixing things up a bit and avoid getting bored by always doing the same thing. These were:
1. “Live to tape” for lectures recorded on my computer, and
2. “Record and edit” for shorter videos where I wasn’t bound to the screen.
If I could choose only one process, I would choose the first one.
For process 1, I set up OBS Studio, a free software available for both Windows and Mac computers. I installed it on my laptop, and created three different “camera angles” that I could switch between: One of the PowerPoint slides, one of my face, and one of my face superimposed on the PowerPoint slides. (Check out James Sumner’s excellent “OBS for Teaching” videos on YouTube on how to set up the program).
While this isn’t strictly necessary, it created a much more dynamic presentation, since I was able to edit the video while I was recording it, and to hide breaks when I needed to take a drink of water or clear my throat by pausing on the slide-only screen. OBS lets you set keyboard shortcuts to switch between the screens. I programmed my Bluetooth keyboard to use a few of the F-keys for this and made little stickers so I would remember which one did what.
If you feel this takes too much time, you can also just create one view that has both yourself and the slides/text/material you are presenting in it. This is also what software like Panopto does automatically. If your institution supplies Panopto or something similar (any kind of program that lets you present slides and video and upload the resulting file somewhere accessible by students) and you do not want to fuss with software, settings, etc., then this will be absolutely adequate to the task.
The second process involved setting up a camera (I tried my digital photo camera, an old camcorder, my phone, and finally a dedicated vlogging camera, with varying results) on a tripod, and talking to it. This is a popular style on YouTube, Instagram, and other video-centric websites, so it is worth emulating if only for the familiarity students already intuitively have with it. Watching online content is a large part of many people’s lives, and students therefore may have an easier time fitting your videos into their everyday routines.
Prepare, Don’t Edit (Too Much)
Both processes can produce results that you feel you need to edit for clarity, flow, or just to make them look nicer. I did all these things, and I am quite happy with what I produced. However, editing takes an inordinate amount of time and resources: mental, physical, and technological. Having wasted a few weekends and late nights on this, my recommendation would therefore be: live with what you’ve produced if you can, and if you can’t, attempt only minor edits.
After the first three weeks, I almost entirely stopped editing lecture videos, resorting to this kind of post-production only if I had made an egregious mistake, or if something had gone wrong with a recording and fixing it would have meant starting over. This became possible because I concentrated instead on researching and preparing a coherent script of sorts.
I spent at least one working day, often more, on creating all the slides and structuring them in a way that they would flow narratively and make sense. Then, I could record “live” and would be done immediately. Even if I had screwed something up so badly that I had no usable recording, this still would have meant that re-doing the video would take at maximum the time it took to deliver the lecture.
Direct-to-camera videos I obsessed over for hours until almost the end of the seminar. I added captions, transitions, images, and video clips, and experimented with using a two-camera setup.
While some students specifically commented that they appreciated the effort and the production value, and I therefore won’t condemn it as unnecessary or too much, this is the first thing I would economize on when it comes to saving time. I shot the last video in five minutes: setting up and framing the camera, talking to it, and uploading immediately.
Accessibility: Picture and Sound
Video and audio quality need to be good enough so as not to be distracting. This isn’t just a nice-to-have thing, but also matters when it comes to accessibility. Students may or may not have issues either hearing or seeing videos, so both aspects need to be solid. Sound needs to be loud enough and not so echoey that it becomes hard to hear, and video shouldn’t be so grainy or dark that people can’t make out lip movements.
For lecture videos, I made it a point to write out much more context on the slides than I usually would, so the slides would already be able to tell much of the story, while not making the font so small that someone watching on a phone screen wouldn’t be able to read it. For direct-to-camera videos I added captions for essential information (such as exam times and conditions). I am convinced that my efforts still weren’t perfect, but I did what I could with the means I had.
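If you caption essential information like exam dates, the SubRip (.srt) format that most platforms, including YouTube, accept as a caption upload is simple enough to generate with a short script. A minimal sketch; the file name, caption texts, and timings are made up for illustration:

```python
# Write a minimal SubRip (.srt) caption file from (start, end, text)
# entries, where times are given in seconds. The captions below are
# illustrative placeholders, not real course information.

def srt_timestamp(seconds: float) -> str:
    """Format seconds as the HH:MM:SS,mmm timestamp SRT expects."""
    ms = round(seconds * 1000)
    h, rest = divmod(ms, 3_600_000)
    m, rest = divmod(rest, 60_000)
    s, ms = divmod(rest, 1_000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def write_srt(path: str, captions: list[tuple[float, float, str]]) -> None:
    """Write numbered SRT cues: index, time range, text, blank line."""
    with open(path, "w", encoding="utf-8") as f:
        for i, (start, end, text) in enumerate(captions, start=1):
            f.write(f"{i}\n{srt_timestamp(start)} --> {srt_timestamp(end)}\n")
            f.write(text + "\n\n")

write_srt("lecture_captions.srt", [
    (2.0, 8.0, "Exam: Tuesday, July 14, 2:00 pm (online)"),
    (8.0, 14.5, "Submit essays via the course Moodle by week 12"),
])
```

Full transcripts are a different, much larger job, but for a handful of essential cues a generated file like this uploads alongside the video in seconds.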
Watch Yourself and Learn
It is important to watch yourself deliver your lectures, especially when you are just starting out recording them. The recording situation is different from live in-class delivery, and also different from a Zoom class. You may cringe at how often you say “uhm,” at how you seem to look away from the camera, or at how you mess up your delivery, but you can learn from this and improve.
It is also important to watch the first few videos to make sure there are no technical defects, or if there are, that you know what they are so you can fix them in the next video. Unless a video is completely unusable, however, do not go back trying to fix something, but rather concentrate on fixing it for the next one. An unexpected semester teaching online is in itself an education in how to do a variety of things, and there’s no point in trying to be perfect right out of the gate.
The Personal Is Critical
Teaching online is different from teaching in person in many ways. In both, however, your specific interests and your personality can and should come through. Whether that means wearing different kinds of clothes, making reference to your favorite band, or sprinkling (inoffensive) dad jokes throughout your videos, I firmly believe students will connect much better to your content if they understand who is delivering it, that this person is a person, and that they care about what they do.
Your own personality, situation, and role in your organization will likely dictate how exactly this will look for you, but it’s worth thinking about before setting out to make a series of videos that may be students’ primary source of information and connection to their teacher for the duration of a term.
Technology and Gear
In the photography and videography world, it’s an oft-stated truism that “gear doesn’t matter,” while at the same time the most passionate discussions amongst practitioners seem to always involve some piece of kit or other, and how it is better or worse than what someone else uses, or what they’ve used before.
This apparent contradiction actually makes perfect sense: gear that does exactly what you need it to do, without slowing down or frustrating you in your workflow, stops mattering. Once you have defined what you need to produce, and you have found a way to do so efficiently, there is little point in chasing minor improvements just for the sake of improvement.
But gear that malfunctions, is unreliable, or is hard to use or to coerce into uses it was not designed for but that you need it to perform, matters a lot. I lost countless hours experimenting with camera settings that produced unsatisfactory results, microphones that weren’t loud enough or that picked up too much background noise, software, and other sundry annoyances. In short: gear and technology that is good enough is fine, but if it’s not good enough, you will be frustrated.
That said, here’s some of the gear I found useful in creating my teaching content:
I used the 2018 model MacBook Air that the university had provided me with. It came with 256GB of storage, 8GB of RAM, enough ports to connect a microphone and an external SSD, and a built-in webcam capable of outputting 720p video at 30 frames per second.
Whether these specs mean anything to you or not: most mid-range laptops and the overwhelming majority of desktop computers that were produced in the last five years would be enough. (NB: If you are recording on a laptop, it is a good idea to prop it up on a laptop stand or a few thick books so the camera angle doesn’t show you from below, which makes videos seem less professional, and for most people also isn’t all that flattering).
I bought the tried-and-true Blue Yeti USB microphone to use with Zoom and similar software in preparation for online classes and recording. While desktop microphones will usually sound the best, they are also often expensive, and prone to noises coming, e.g., from typing on the same desk. I got around this by placing my keyboard and trackpad on stools next to the desk whenever I was recording, but this was not the most practical solution.
Another purchase I made was the Boya BY-M1 lavalier clip-on microphone. This costs around €20, and is often recommended by and for people making content for YouTube. It plugs into the headset port on many laptops, tablets, and phones, and produces very good quality, especially at the price. I used it for a few online calls and videos.
Whatever you buy, make sure your device is compatible with the plug on the microphone, as there are different wiring standards (TRS vs. TRRS, for example) even though a microphone will physically fit any 3.5mm port. My Apple MacBook and Lenovo Yoga worked without a hitch with the microphone pictured. One thing to make sure of whenever you’re recording sound is to set the input level: if it’s too low, you’ll be hard to understand; if it’s too high, the sound will distort. Some programs and devices have an auto-leveling feature, which may or may not work well. It’s always best to test this before recording important content.
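If you want an objective check on your input level after a test recording, you can inspect the file’s peak amplitude programmatically instead of judging by ear. A minimal sketch using only the Python standard library; since no real recording ships with this post, it demonstrates the check on a synthetic half-scale test tone:

```python
# Check recorded audio levels: read a 16-bit mono WAV file and report
# its peak amplitude relative to full scale (1.0 = clipping point).
import array
import math
import wave

def peak_level(path: str) -> float:
    """Return the peak amplitude of a 16-bit PCM WAV as a 0..1 fraction."""
    with wave.open(path, "rb") as wav:
        assert wav.getsampwidth() == 2, "expects 16-bit samples"
        samples = array.array("h", wav.readframes(wav.getnframes()))
    return max(abs(s) for s in samples) / 32768

# Stand-in for a real test recording: one second of a 440 Hz sine
# wave at 50% of full scale, written to a demo file.
with wave.open("level_check_demo.wav", "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)
    w.setframerate(8000)
    tone = array.array("h", [int(16384 * math.sin(2 * math.pi * 440 * i / 8000))
                             for i in range(8000)])
    w.writeframes(tone.tobytes())

level = peak_level("level_check_demo.wav")
if level > 0.99:
    print("likely clipping; lower the input gain")
elif level < 0.1:
    print("very quiet; raise the input gain")
else:
    print(f"peak at {level:.0%} of full scale")
```

As a rough rule of thumb, peaks that sit somewhere around half of full scale leave headroom against distortion while staying easy to hear; a file that peaks at 100% has almost certainly clipped.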
If you have to create content regularly, the resulting files will take up a lot of disk space. My laptop became painfully slow after the first month or so. I bought an external 1TB SSD in order to free up space, and it made all the difference in the world.
I chose the Samsung T5 SSD, mostly for its looks (the differences in speed between manufacturers don’t matter all that much unless you move huge files frequently) and because it came with both a USB-C cable for newer computers, and a regular USB cable for older models. That meant I could move it between any devices I owned and would own in the future with ease and without having to buy any additional adapters.
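Offloading finished recordings to the external drive can also be scripted, so it becomes a one-command habit rather than a chore. A minimal sketch using only the standard library; the folder paths, the file extensions, and the “older than a week” rule are assumptions you would adapt to your own setup:

```python
# Move video files older than a cutoff from a recordings folder to an
# archive folder (e.g., on an external SSD). Paths, extensions, and
# the seven-day default are illustrative assumptions.
import shutil
import time
from pathlib import Path

VIDEO_EXTENSIONS = {".mp4", ".mov", ".mkv"}

def archive_old_recordings(source: str, archive: str,
                           max_age_days: float = 7) -> list[str]:
    """Move old video files from source to archive; return their names."""
    archive_dir = Path(archive)
    archive_dir.mkdir(parents=True, exist_ok=True)
    cutoff = time.time() - max_age_days * 86_400
    moved = []
    for path in sorted(Path(source).iterdir()):
        if path.suffix.lower() in VIDEO_EXTENSIONS and path.stat().st_mtime < cutoff:
            shutil.move(str(path), str(archive_dir / path.name))
            moved.append(path.name)
    return moved
```

Called with hypothetical paths like `archive_old_recordings("Recordings", "/Volumes/T5/Archive")`, it moves last week’s videos off the internal drive while leaving recent files and non-video files (scripts, slides) where they are.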
Keyboard, Trackpad, and Second Screen
This is by no means necessary, but I used the old Bluetooth keyboard pictured above and a trackpad, as well as an LG 27UD58P-B external monitor, in my setup to make it more practical and increase productivity. This allowed me, in essence, to remote-control the recording and the changing of slides, which made things a lot easier while recording. I also used a Rain Design mStand laptop stand to elevate the computer and therefore its camera, so the resulting frame would look pleasing, without walls looking askew.
You don’t need to get a high-resolution 4K (or above) display for a similar effect; what matters is having a second screen that is 1080p resolution at the minimum and ideally the same aspect ratio (typically the 16:9 widescreen used on televisions) as your final video. That way, you can put your slides on the second screen and don’t have to juggle with several windows on the same screen while recording. (Note: if you’re using two screens and you’re on a Mac, PowerPoint is better as a presentation program than Keynote for recording purposes, because Keynote tends to take over all connected screens when you go into presentation mode, while PowerPoint will still let you display another app, like OBS, on the second screen).
If you are using a webcam, your phone, or some other camera you already own to create content, you don’t technically need a new camera for the kind of content usually required for teaching.
I made do with my old iPhone 6S and a tripod clamp to record some videos (the Boya microphone plugs into the phone as well, but you need to use an app like FilmicPro to adjust sound levels, since it distorts when using the built-in video app), used an old camcorder for others, and an old FujiFilm X100S camera sometimes.
The phone was easiest, since I was able to see myself in the screen and frame the shot, and plug in the microphone. That meant only one file with everything in it at reasonable quality once I was done. But it meant not being able to use my phone, filling its storage, and draining the battery. The camcorder also worked okay since it had a screen that flipped out so I also could frame myself, but sound quality was lacking. The FujiFilm camera produced the technically best video, but its only screen was on its back, and its autofocus so bad that I constantly had unusable or barely usable shots.
I finally upgraded to a Sony ZV-1 camera, which came out only in June of 2020. It has decent quality built-in microphones, good video quality, a screen that rotates so you can film yourself, a tally light that lets you know the camera is recording, and it can be remote controlled either with a dedicated remote, or with a smartphone.
It is quite pricey, though, and I only bought it because I wanted to upgrade my camera anyway. If you are on a budget, the most important features I would look for in a camera are A) a screen that flips up or out so you can point the camera at yourself and still frame the shot, and B) a microphone input.
Creating teaching content takes a lot of time. I hope the tips above spare you some of the effort it took me to figure out what needed to be made and how to plan, create, and deliver it. I don’t have all the answers, but I do have some experiences that may help others.
And one last note: it is worth considering before the semester starts who owns your content and what you might want to do with it after the semester is over. If your institution holds the copyright to all your content but you do not hold a permanent position there, you may not want to make it too easy for it to be reused. In that case, you can sprinkle references throughout that will immediately date the content, be they to current news stories, the date, etc.
If you do own the copyright to your videos, or if you can reuse them yourself for teaching further down the road, you can in contrast keep them somewhat “timeless.” This makes it possible to reuse them as they are, or with only minor edits. I tried to achieve this by referring ahead to “the next video” instead of saying “next Tuesday,” for example, and by consistently dressing in long-sleeved shirts, so it wouldn’t look odd if someone re-watched the videos I made in summer during the coldest days of winter.
Whatever the reason why you are creating teaching content online: Best of luck!
This post was updated on July 19, 2020 to link to the correct MacBook review video and to add a section on the keyboard, trackpad, and second screen I used.
“There is an art, or, rather, a knack to flying. The knack lies in learning how to throw yourself at the ground and miss” begins a memorable passage by Douglas Adams, the witty, absurdist spirit who wrote the radio plays and books of the Hitchhiker’s Guide to the Galaxy trilogy. (It has five parts).
Adams, a contemporary and companion of Monty Python who combined that style of irreverent comedy with a penchant for science fiction, died unexpectedly of a heart attack in 2001 at the age of 49, but by then had already managed to make an indelible mark on popular culture.
He was a famously procrastinatory writer. His publisher Sonny Mehta in 1984 all but locked Adams in a hotel room for a week to finish a manuscript, an incident so steeped in myth that it has even been turned into a play. Journalist Rod Stewart, writing in The Bookseller, called his piece about the event “The Berkeley Hotel Hostage,” though it becomes clear from context that Douglas went willingly, more guestage than hostage, carting along a typewriter and a guitar. He would use the former to hammer out pages that overwhelmingly ended up in the paper bin, and the latter to play Dire Straits songs to himself and what I have to assume was an only marginally enthused Mehta.
I own neither a typewriter nor a guitar, and save for occasional trips aboard helicopters and airplanes, have never attempted to fly. But when I read the lines about flying (placed among the first couple of pages of Adams’s So Long and Thanks for All the Fish, the book he finished in that hotel room), they spoke to me. A few more:
The first part is easy. All it requires is simply the ability to throw yourself forward with all your weight, and the willingness not to mind that it’s going to hurt.
That is, it’s going to hurt if you fail to miss the ground. Most people fail to miss the ground, and if they are really trying properly, the likelihood is that they will fail to miss it fairly hard.
Clearly, it is the second part, the missing, which presents the difficulties.
One problem is that you have to miss the ground accidentally. It’s no good deliberately intending to miss the ground because you won’t. You have to have your attention suddenly distracted by something else when you’re halfway there, so that you are no longer thinking about falling, or about the ground, or about how much it’s going to hurt if you fail to miss it.
It is notoriously difficult to prize your attention away from these three things during the split second you have at your disposal. Hence most people’s failure, and their eventual disillusionment with this exhilarating and spectacular sport.
I’ve never attempted to “throw myself at the ground and miss” in order to float, but I’ve always believed there was more than a little about the process of writing in Adams’s words. Call it “getting into the zone” or achieving a state of “deep work”: writing happens when most other things in the world fade away. You fly by forgetting to fall.
Procrastinator extraordinaire Douglas Adams surely knew about that. He knew about distractions, too. Don’t be misled by his use of the word here. The distraction needed so you can fly is not the same as all the distractions that keep us from whatever needs to be done. Here it is, rather, the key that lets you enter a mental state in which you are writing and thinking about how to best write rather than thinking about all the many other things going on. It is a distraction from the regular din of the news that nudges you into a comfortable place.
Much like Arthur Dent, the unlikely hero of Adams’s best known work, I’ve found myself more often than not in casual clothing, facing something quite unusual. (If you don’t understand this reference, do yourself the favor and find any visual representation of Dent). Much like Arthur Dent during his first adventures, I’ve been reeling from the newness of it all rather than formulating a coherent strategy of adaptation.
I haven’t been writing much.
It is of course normal in this situation that our “productivity,” for whatever life is still left in that soulless marker of a Protestant Work Ethic™ gone global, has tanked. Not for everyone individually. One friend reports from Californian quarantine that writing is going better than ever. We all deal with uncertainty differently, and as with the world in general, some of us do so in a manner that society will ultimately reward while others do not. Yet I suspect strongly that in aggregate, the doing of things usually considered necessary to be done has gone down. Other things have taken their place.
While I berate myself for terminal laziness (and yes, there’s that discussion to be had as well, about whether laziness is even a useful way of framing this), I have still learned new skills and done days upon days of research. The skills are more technical and organizational, though, and the research often devolves into watching YouTube videos on how to wring what I need from an obdurate piece of technology. They are different skills for a different life in another society entirely. Such a society is what we have, so they are not for nought. They appear superfluous to my self from three months ago, but that person was adapted to the Earth of three months ago.
I’ve become proficient at setting up microphones and cameras and software, and much more proficient at directing and editing myself and integrating that media into virtual classroom environments. All of these things needed to happen and all of these things needed to be done. But I cannot escape from both the knowledge and the feeling that in order to move forward, in order to even have a career after the pandemic hopefully one day soon has run its course, I have to do something else. I have to write.
Unlike Adams, I can’t afford to lock myself away with room service and a hot tub, but I am trying to create such an environment, both physically in my home and mentally in my work habits. It’s slow going. It’s hard. And I have it good, comparably. I have no toddlers to entertain or eight-year-olds to homeschool. But writing is hard, just by itself. For every day of serene flow that puts paragraphs on the page by the dozen, there are weeks of reading and thinking and making notes and that creative technique known as wanton couch-sitting. Writing, for me, is even harder when it is so difficult to miss the ground, difficult to distract myself from the *gestures wildly* of what is going on to find that little spot of sparkle that sways me into the stream.
As Stewart wrote about the final product of Adams’s luxury confinement: “The patchwork alternates between the surreal and the everyday.” And so it does. Quite by accident, that is also an astonishingly accurate description of how the world has worked itself out these past five weeks.
Yet, even in this new world, writing will have to be done. When we no longer communicate in ways we have become accustomed to, some new ways must emerge. And some old ways will need to come back. Writing is thinking silently but forcefully into the ether. Ideally, it is communicating with the benefit of forethought. That is a worthy thing to be doing, and I will be doing more of it again. Just, please, dear editors, collaborators, students, and friends: I’m still learning to fall and miss the ground accidentally. But I am developing the willingness not to mind that it’s going to hurt.
In the morning, the Robert Koch Institute’s press conference; at midday, the news, with all manner of collected statements from ministries, states, and cities. In the evening, the Chancellor speaks. In between, a steady stream from and on all channels. Acquaintances on Twitter share memes as well as links to studies of a historical, epidemiological, or sociological nature. You hear much that you have already heard, over and over, and every day something new on top. Washing your hands is so the week before last. Avoiding contact is from last weekend. Sheltering at home is the now. Please don’t panic-buy, and leave the toilet paper on the shelf, unless of course you need toilet paper.

But what counts as “panic buying”? And when are you just buying a little extra because you would rather not run the risk, every other day, of infecting someone (asymptomatically) or being infected yourself? One grows weary of the mantra-like repetition of “the situation is developing dynamically.” Of course it is; if it weren’t, there would not be much developing to speak of. On social media everything is in overdrive anyway, especially in times of the inaptly named “social distancing.”

Physically, people are moving apart. By how much? Well, that is developing dynamically. Socially, however, people are moving closer together. In this situation, some long for the authority and the capacity to act of the state. But please not so far that personal perception registers it as dangerous, invasive repression. Where is the sensible, democratically defensible line? Who helps you find your bearings when the need arises?

This is the hour of those who can contextualize and explain the world. What counts right now is a different kind of authority: that of expertise, real or perceived. A virologist named Christian Drosten, until now eminent in his field but not much of a public figure, suddenly has a podcast. The educated middle class does not miss an episode; after all, one has to know what’s what. A pertinent study on the influenza of 1918–20 was new to Drosten a few days ago; he mentions it with respect. And then he immediately has to issue a correction: no, just because St. Louis closed all its schools at once in 1918 does not mean we have to do the same here now. His employer, the Charité, had understood it differently, and so had the media. A few days later it is all moot anyway. The schools are closed, and in expert and political circles there is little resistance.

My parents, meanwhile, put their trust in Professor Kekulé, who recommends staying at home but, away from the crowds, definitely still getting out into the fresh air. For my parents, that works. I live in a city of millions.

The infectiologist Professor Addo of the University Medical Center Hamburg-Eppendorf answers questions about vaccine research concretely and in print-ready sentences. Shared approvingly on Twitter, her interview promptly draws as its first comment the charge that she is raising false hopes by reporting that vaccine development is already in full swing. After all, it will be a good while yet before a vaccine is available. She never denied that, but nobody at the television station provided that context either.

The interviews with the experts expect them to supply the contextualization themselves, on the spot. On television, if those in charge of programming had their way, the virologists would ideally also be epidemiologists, as well as sociologists and medical historians in equal measure. Harald Lesch suddenly appears on television beyond philosophy and physics and explains that society is “sewn to the very edge of the cloth,” with no slack to spare.

Just as people seek salvation in political authority, so those who see their world turned upside down by the virus crisis seek orientation. What is needed is the one explainer. The one you can rely on.

The one: for explainers are still overwhelmingly imagined as men. Behind this lies society as a whole in its basic structures, but also decades of media- and market-driven creation of the figure of the public intellectual, since at latest the 1960s or 1970s. Publishers, television networks, magazines: ever since, they have all sought, found, and invented, again and again, the man who explains the world. Who reduces complexity without sounding as if everything were simple. It takes a fine sense of how simple you can make things, but it must never sound simpler than that, even if it is.

The sociologist Daniel Bell once described himself as “specialized in generalizations.” Whether Bell merits that claim may be left open, but such people are clearly in demand now. Explainers who can contextualize and communicate, who can simplify without flattening, who can make science communication and political decision-making processes understandable.
Dass alle unsere Erklärer ab einem bestimmten Punkt daran eben scheitern sollte uns dabei aber nicht notwendig dazu inspirieren uns bessere Allerklärer zu suchen. Eher sollte es uns dazu bringen, uns damit zu versöhnen, dass die Welt komplex ist und komplex bleibt, und dass das Wichtigste in diesem Moment ist, Lese- und Verständnisstrategien zu haben und mit anderen zu teilen, wie mit all der Information am besten umzugehen sei.
Pardon, das Zweitwichtigste. Gleich nach dem Händewaschen. Auch wenn das von vorletzter Woche ist.
In August, I posted a long thread on Twitter regarding the problem of bad-faith distortions of history. It was set off by an especially egregious statement by Dinesh D’Souza, the poster child for making absurd arguments supposedly underpinned by historical fact. He likened the program of the modern Democratic Party to that of the Nazi Party in Weimar Germany. I countered with a primary source analysis of the NSDAP program, pointing out why his take was patently preposterous.
The thread – see below – became quite popular; according to Twitter, it was seen by at least 500,000 people (the original tweet) and as many as 1.4 million (the parts of the thread taken together). This has given me some hope that we can reach people outside of academia by highlighting how to think historically, and what it means to analyze, criticize, and contextualize sources.
The problem with D'Souza is not just that he is bad at history. He is actively attacking the fundaments of what it means to research, interpret, write, and understand history while laying claim to the term. His method would be sophistry, if it were in any way subtle. 1/28 https://t.co/FMUAbwicLA
There’s writers in there like Pynchon. But if he were a realist. There’s thorough knowledge of American history and the people who wrote it down and made it up. There’s glee in repetition and reinvention, and smart set-ups that read like omissions at first, and omissions that you then make do the work of a smart setup. Tom Wolfe’s signature essay “The ‘Me’ Decade and the Third Great Awakening” is a joyful trip into the American maudlin, dateline 1976. It’s narrated like a novel, the art form Wolfe denied its primus inter pares place in the literary quiver; that is, until he succumbed to it.
Wolfe, who wrote like he dressed – impeccably and with sprezzatura, but in an initially off-putting way – died on Monday, May 14, 2018. His iconoclasm did not end in death. The New York Times obituary managed to get his age and birth date wrong in its first go-around, and required a second correction to fix the title of one of his novels. Much hyperbole, as with any literary death, has accompanied Wolfe’s passing, as has much reflection on his place in the media world, and the media that he placed in the world.
If Wolfe was an icon, he also behaved like one. The white suit that was his trademark, Wolfe said, made him look like a Martian, and that helped people relate to him, tell him their stories, see him as an impartial third, an observer from a disinterested place reporting back to the mothership. Only the mothership sat pat in New York City, sharing the life of the elites he castigated in his most successful novel, Bonfire of the Vanities. Before Wolfe was a novelist, however – lauded and applauded first, panned and criticized for his later works – he was a non-fiction writer. A journalist; a New Journalist. In essence, a fiction writer of non-fiction.
Reading the “Me Decade” essay, you’ll be struck by what passes for reporting here, even by the standards of the scene-setting New Journalism that Wolfe co-created with, among others, Hunter S. Thompson. In one of the most-cited passages (presumably because it starts the thing off), Wolfe reports from the plush, solvent-cleaned floor of the Ambassador Hotel in Los Angeles. His heroine is the woman who screams “hemorrhoids!” when asked to name the one thing in her life she most wants to get rid of. Wolfe smilingly berates her for the self-centeredness of her choice, then goes on to imagine a deep dive into her mind, constructing the story that lies behind that moment of clarity and catharsis:
She begins to feel her hemorrhoids in all their morbid presence. […] Well–for God’s sake!–in her daily life, even at work, especially at work, and she works for a movie distributor, her whole picture of herself was of her… seductive physical presence. […] When she walked into the office each morning, everyone, women as well as men, checked her out. She knew that.
But, alas, the hemorrhoidal “peanut” intervenes (the same essay features a description of Jimmy Carter, so who knows, peanuts may have been on the national mind in America two centuries post Declaration of Independence), messes up that picture, creates a cleavage between how she looks (“The Sexual Princess!”) and what she wants vs. what she thinks about: “As she smiled sublimely at her conquest, she also had to sit on her chair lopsided, with one cheek of her buttocks higher than the other…”
The age of the piece shows. Not just in its inherent unquestioned sexism or casual inclusion of homosexuality as among the host of things people at self-help or self-actualization seminars want to get rid of, or the mentions of lifestyles involving either “Sevilles and 450SL’s” or “Superstar Qiana sport shirts,” things a present-day reader likely will have to punch into a search box. That is, if they don’t just stare at the brand names in confusion for a microsecond and then decide they don’t care enough to even do that.
The age of the piece also presents itself, somewhat sadly, in the fact that there was a gushing inventiveness to Wolfe’s 70s and 80s pieces of reporting, pieces he was actually able to get published and paid for – and that’s not something we’re used to anymore. New Journalism, to be sure, was influential. But its excesses have been cut down to size. Journalism, even the highly readable kind, seems tame in comparison today. Not in content, but in language; edited or self-edited into tonal conformity. Even if a writer as gifted as Wolfe produced something in a style as eccentric as Wolfe’s today, it could not have the same effect. You can only break new ground once. The frivolousness would be muted.
That is not to say that journalism is rulebound now. In an increasingly fragmented media ecosphere, the fraying and frothing fringes, the extremists and rightwing million-dollar pundits hired to be performance artists against the other side, have long lost any semblance of decorum. But inventive, happy about the language they use, knowledgeable and excited to try out and on words for size, to break the molds of the newspaper article headline/standfirst/body text triad or the scripted TV narrative, they are not. And in terms of value: on cable, certainly, money follows the performance. But are writer-journalists highly paid? To the tune of being able to afford 12 rooms in Manhattan?
Admittedly, there is a “look here” flashiness in a Wolfe essay. There’s that stringing-too-many-words-together-with-hyphens tendency (on Jimmy Carter: “he was of the Missionary lectern-pounding amen ten-finger C-major-chord Sister-Martha-at-the-Yamaha-keyboard loblolly piny-woods Baptist faith”). And some of the things that were fresh in 1976, forty-plus years on, no longer are.
But these are not just the rantings of a word-heavy Cassandra who doesn’t like the way that America, filled to its cultural gills with baby boomers cusping into adulthood, is suddenly behaving. They are that, but they also are the observation of an astute critic, of someone trained in American Studies by his Yale PhD program, someone whose referential drive-bys include swift free throws to Perry Miller and Max Weber and snide digs at Walter Gropius and Mies van der Rohe – a prefigurement here of Wolfe’s more ornate taste in architecture and his longer takedown of the Bauhaus in his 1981 From Bauhaus to Our House. Even if you find, as I do, the criticism of the German-inspired clean, functional architecture and design misguided and overwrought, you cannot deny that, in the reverence its acolytes award it, it has departed from the ideal of a democratic, down-to-earth way of equipping people with living spaces and material goods. As Wolfe writes in a footnote included with “Me Decade”:
Ignored or else held in contempt by working people, Bauhaus design eventually triumphed as a symbol of wealth and privilege […]. [T]he Barcelona chair [.] now sells for $1,680 […]. The high price is due in no small part to the chair’s Worker Housing Honest Materials: stainless steel and leather.
Some of this is just Wolfe being a grouch, a Christopher Lasch-type social critic with more of a flourish and less academic rigor – in his 2006 NEH Jefferson Lecture, for example, Wolfe casually omits both Johann Gottfried Herder’s and Henri Bergson’s coinages when he talks about “homo loquax,” misidentifying that creature, too, as “talking man” instead of the “chattering man” Bergson had in mind.1
Tom, “Me,” and You
But Wolfe’s writings contain truly original insight and a rare talent for telling stories of approachable verisimilitude. Ken Kesey, the author of One Flew Over the Cuckoo’s Nest and the subject of Wolfe’s The Electric Kool-Aid Acid Test, attested that Wolfe’s book was highly accurate. It was a driving narrative about an almost Christlike guru and his “Merry Pranksters” traveling the United States in a VW bus, taking LSD-laced Kool-Aid. Wolfe’s imaginative language made that book as much as its story did. Writing upon the publication of Electric Kool-Aid in 1969, the Guardian2 summed up Wolfe’s use of language and cadence:
The style uses the repetition and the compressed adjectival forms of a poem, and the reader is pleasantly caught up in the internal rhythms. For all its seeming superabundance of punctuation and participles, every word seems placed with a care and a skill of contrivance which should command respect.
Gay Talese, fellow New Journalist and impeccable dresser, called Wolfe a magician for his use of words. But it was not only words that Wolfe’s writing consisted of. It was ellipses, too, dots, dashes, exclamation marks and transcribed primal screams:
Wolfe’s greatest talent though, perhaps, was the effortless translation from eye to analysis and judgment to page. And judgment there was in spades. Wolfe may have been avantgarde in style, but he was conservative in substance. He supported Ronald Reagan and George W. Bush, citing the latter’s “decisiveness” in starting the Iraq War as something that impressed him. Wolfe never held back with opinions, whether clearly stated or heavily implied in his writings. The criticism leveled at these opinions is valid. But there remains, through it all, the fact that Wolfe was influential. In style, and in substance.
The most important thing a writer can do is to observe, to study and then write. Wolfe did that, iconically attired, days piling on the next, 1500 words sunrise to sundown, the equivalent of ten triple-spaced pages at a time. For decades. Ten triple-spaced pages at a time.
The lecture also begins with a wry humblebrag that is Wolfian to the core: “Ladies and Gentlemen, this evening it is my modest intention to tell you in the short time we have together . . . everything you will ever need to know about the human beast.” National Endowment for the Humanities. Awards & Honors: 2006 Jefferson Lecturer. Tom Wolfe Lecture, “The Human Beast”. ↩
The future would be brilliant, glowing, and rich in color. The future would be lifelike on the screen as much as off. The future, Hollywood attempted – and attempts – to tell us, is 3D.
Watching Steven Spielberg’s Ready Player One in 3D a few days ago, I was once again struck by how bad a deal 3D is for cinemagoers. Despite a discount for a weekday showing, I paid an extra three Euros for the privilege of seeing the movie in 3D. It felt like less for more. I am not alone in that assessment. Vice‘s Meghan Neal summed it up succinctly:
The problem is, 3D is often not a premium viewing experience at all. For many people (myself included) it’s a far worse experience than seeing the movie in regular 2D, so now you’ve paid extra money for two hours of unpleasantness.1
Two years earlier, in 2014, Jeff Bakalar, writing for CNET.com, had come to the same conclusion:
As much as the movie studios would like the opposite to be true, 3D movies are handicapping the theatergoing experience and there’s almost never a time you should pay extra for it.2
I paid more money in order to see a movie that was, as the overwhelming majority of so-called “3D” movies are, shot in 2D. Ready Player One, chock full of CGI scenes set in a futuristic, immersive alternative reality, was nonetheless exposed onto Kodak 35mm film stock in Panavision cameras whenever actual actors took the stage. This is not a bad thing. In fact, though I am perfectly happy to record 4K video on my iPhone for home videos, and though low-budget cinematography using DSLRs and mirrorless cameras has opened doors for many creatives, movies shot on film still look “right” to me. A preference, for sure. But one informed by over a century of motion pictures that were naturally captured and presented this way, since there was simply no alternative.
If you go into a movie theater today you will, unless you happen to live in a place where an IMAX theater is available and playing the film you want to watch, see a 3D presentation that uses the same projectors as 2D movies use. Since 3D, however, has to display two distinct images which are then filtered by a pair of unwieldy and seemingly always already smudged plastic glasses to create the 3D image, the brightness of such a presentation is about half of what you’d see if you were watching a 2D movie.
To be fair to the technology, this isn’t an inherent problem with 3D. In the age of high-powered laser projection, the loss of brightness could in theory be compensated for. Most often to blame is a general lack of care in cinema projection rooms the world over, owing to cost-cutting by cinema owners, who hire fewer, and less qualified, staff than would be needed to run projectors smoothly and without a hitch.
In the words of Den of Geek‘s Brendon Connelly:
The 3D format doesn’t inherently result in darker, dimmer, less focused images, but the typical multiplex has a terrible track record in taking the few, neither difficult nor expensive steps to make sure 3D is being presented properly.3
In essence, then, through a combination of technology, marketing, and the economic bottom line, 3D today means you a) pay more to b) watch a movie that was not captured in 3D. To see it you have to wear c) a decidedly non-futuristic and often uncomfortable pair of plastic goggles which d) make the picture significantly duller. The 3D effect, in most cases, does not make up for this. I’d prefer to see a brilliant, colorful image instead of a dull excuse for a presentation in which sometimes, maybe, something pops out of the screen.
Not everyone is bearish on 3D, though. Michael V. Lewis, in a 2017 piece for Variety, argued that “3D will continue to transport our imaginations light years ahead.” Then again, Lewis is the CEO of RealD, which bills itself “the world’s largest 3D cinema platform with more than 32,000 screens in 72 countries.”4
Past Futures of 3D
For decades, 3D has meant the promise of a better, more immersive media experience. Of a narrative world that would not only be presented to you, but envelop you. It has, for the most part, been a disappointment. This despite the fact that 3D image projection has a long history dating back to the middle of the nineteenth century, and that the technology to make it work reasonably well as projected still or moving pictures has been around since the 1920s.
The 3D in use today is essentially the same basic technology developed by Edwin H. Land and others at Polaroid. Incidentally, the company that became synonymous with instant photography first produced polarizing filters. Since a polarizer lets through only light of a certain orientation, using one on each eye, with their polarization offset by 90 degrees, filters out the unwanted parts of an image. A projection system that works in sync with these glasses can thus project two images onto one screen at the same time, and the glasses will feed only the image meant for the right eye to the right eye and vice versa.5
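The filtering the glasses perform can be put in numbers. Here is a minimal sketch in Python, assuming the simple linear-polarization scheme described above (the `malus` helper is my own naming; modern cinema systems typically use circular polarization, but the arithmetic of crossed filters is the same in principle):

```python
import math

def malus(i0, theta_deg):
    """Malus's law: the intensity transmitted through a polarizing
    filter offset by theta degrees is I = I0 * cos^2(theta)."""
    return i0 * math.cos(math.radians(theta_deg)) ** 2

# A right-eye frame, polarized at 0 degrees, reaches both lenses:
right_lens = malus(1.0, 0)    # aligned lens passes the full intensity
left_lens = malus(1.0, 90)    # crossed lens blocks it almost entirely

print(round(right_lens, 3), round(left_lens, 3))  # prints: 1.0 0.0
```

An unpolarized projector beam already loses half its intensity at the first polarizer alone, which is one way to see where the brightness penalty of 3D presentations comes from.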
The first big push for 3D came in the 1950s. Much like Cinerama, Cinemascope, and other widescreen formats, as well as the ever-elusive Smell-o-Vision, it was born out of the confluence of technological progress and the advent of television.
My own father recalled how his parents, financially stable but far from rich in post-war West Germany, one day calculated the cost of their many trips to the cinema and decided that a black-and-white TV set, then a splurge of an investment, would not take too long to amortize. Their case is typical. People who had become accustomed to being entertained by the big screen several times a week traded in the size of the movie screen for the convenience of their own couches. The movie industry reacted with complicated, expensive, impressive, or simply different ways of presenting movies. This is what gave us Cinemascope widescreen motion pictures, and improved definition and depth from 70mm projection. And it gave us 3D.
Ultimately, the technology was brought down by some of the same issues that surround 3D today: price (studios wanted to rent out two copies of a film for 3D, since each eye needed to see one), and the sloppiness of projectionists.6
3D flared up again briefly in the 1980s. This, too, did not last.
The current 3D craze dates to 2010. To be more exact, it dates to one movie (and, some might argue, the only movie) which showcased what the technology was capable of: Avatar. With studios riding the coattails of Avatar‘s success and hastily converting movies to play in 3D that gained little or no benefit from the technology, audiences quickly became fed up with 3D – again. Although 3D has not died, the hype fizzled pretty quickly.
You may have noticed that something like twenty-five years passes between these attempts. Time for a generation to grow up that does not remember that 3D was ever a thing, and that does not remember the problems that come with the technology and/or its lackluster implementation. Enough time, too, to hope that technology has moved on enough to render obsolete the issues that ended the previous wave.
You Don’t Want 3D (Yet)
All 3D systems suffer from drawbacks. Most importantly, all of them require specialized capture systems if the 3D effect is to be convincing, and none of them works with 3D projectors alone. You, as the viewer, also have to do something: you have to put on 3D glasses. Unless this limitation goes away, taking with it the headaches (literal, in the case of 3D cinema, as well as figurative), 3D will not become the default option for screened entertainment.
Technologies fail, over and over again, if they are not able to insert themselves successfully into the everyday habits of people. Despite being ostensibly higher quality or more convenient than what came before, media formats of all kinds have failed because they were too expensive, incompatible, suffered from limited availability, or a combination of all of the above.
8-track cartridges lost out to the smaller, cheaper cassette tape. The APS photo format, pushed by camera manufacturers in the 1990s and 2000s, offered more flexibility than the 35mm film then typically used for snapshots, at the cost of somewhat reduced quality. But not enough flexibility. It never gained a strong foothold in the market, and was eventually doomed by a technology that had almost all of its advantages and none of its disadvantages: the digital camera.
Augmented reality (AR), too, has been much more successful to date than virtual reality (VR). Augmented reality only requires you to use technology you already own in a somewhat different way. The Pokémon Go craze of a few years back was made possible because many people had the necessary technology – a smartphone – already in their pockets. VR, in contrast, requires us to buy expensive, specialized displays in the form of unwieldy glasses that, even if one owns them, one doesn’t just carry around.
This wave of 3D, too, seems to be cresting. IMAX, the company behind many a giant-screen nature documentary and some of the better 3D projections around, announced in 2017 that its future plans include fewer 3D showings. Instead, it plans to capitalize on IMAX-ready 2D movies, such as those shot in large film formats. According to IMAX CEO Greg Foster:
Consumers in many markets are showing a clear preference […] It’s apparent that the demand for 2D film is starting to exceed that of 3D in North America, and we’ll be looking to keep more of our films in 2D as a result.7
If your preferences are similar, you can go see Ready Player One not in 3D, nor as a digital projection, but in the grand scale 70mm format that made 2001 look outstanding fifty years ago. In glorious 2D.
John Hayes, “‘You see them WITH glasses!’… A Short History of 3D Movies,” Wide Screen Movies Magazine. Last revised September 14, 2014. Esp. subsection “Decline and Fall…” http://widescreenmovies.org/WSM11/3D.htm↩
I presented two very different papers. The one I gave at Coimbra, concerned with the idea of the outlaw in the American West, connected to my dissertation research on the nineteenth-century American Southwest. The one on the history of the future, given at York, was tied to my current project on the history of popular books describing and diagnosing society in Germany and the United States during the 1970s and 1980s.
Though it is concerned with markets, I found the ideas presented in this volume a very useful jumping-off point for thinking about legality and legitimacy in the American Southwest. The American West has, bewilderingly, been described at the same time as especially lawless and violent, and as relatively tame compared to the cities of the US East Coast.
Much of this is, of course, down to how one interprets sources and which statistics one deigns to trust. There is a fundamental discrepancy here, though, which to me comes down to how people perceived the place they were living in as opposed to how it compared to other places in terms of recordable data. It would not have occurred to me that this work, despite being done in close proximity to me, could so directly influence my thinking on something entirely unrelated.
As for my paper at York, delivered at what is likely the single most insightful and fascinating conference on the broad topic of “science” and “future” I will attend during this Saturnian year, it benefitted from my recalling a comment by Roger Launius that he was reminded of Hal Lindsey’s Late Great Planet Earth when hearing about my project. At the time, four or so years ago, I filed this away as interesting, but couldn’t quite place it. In the York paper, things had finally congealed and come together.
None of this is especially surprising, but it bears reminding oneself that research is a strange and meandering journey, and that those of us who embark on it should sit in the crow’s nest scanning the horizon, but also climb down and talk to people every once in a while.
You may have landed here because you were looking for me on the internet. If you looked for me on Facebook, you may have noticed I’m not there. I’ve deactivated my account. That’s not the same as outright deleting it. I’m not saying I’ll never be back. But I’m not there now.
Here’s the message I put on my Facebook account upon leaving, for reference:
Facebook. Ok, I’m exhausted.
There’s a new development in the Facebook data kerfuffle every day or so. This has been a long time coming. Facebook has never been forthcoming about data sharing and what exactly it’s doing, or really cared to inform us. This is nothing new, but so far it’s always been an “I’ll suck it up because everyone’s here and this is a useful platform” kind of thing. But I’m just too annoyed and tired now, and I’m deactivating my account. For a bit. Or forever.
Why am I out now? I’ve been on this platform since 2006, and it’s been fun. It’s been enlightening at times, but it’s also been a giant timesuck. I am not above a timesuck, but honestly it’s just not as much fun anymore.
I’m not turning into a luddite or lobbing sabots into machines. I’ll keep other social media accounts, some of them even with companies owned by Facebook.
You can find me on Twitter. Not that Twitter doesn’t have its own problems. But for now, it’s more useful to me. And more fun. I’m @torstenkathke there in a professional capacity, and @ictusoculi for private stuff. The line between the two is thin, admittedly.
You can find me on WhatsApp using my phone number, which is in my Facebook profile (since WhatsApp is also in the Facebook universe, you probably won’t have a hard time finding me there) and I’ll try to create a Facebook Messenger account that’s not tied to a Facebook account. Not sure how useful that’ll be, but I’ll give it a shot.
I’m on Instagram as @ictusoculi as well. There’s another Facebook property I’m not leaving. I’m not out of the world. I want to be findable and responsive. Just not here, not all the time.
I also will try to use some of the time I get back from not procrastinating on Facebook to procrastinate some blog posts on thushistory.com into existence. That’s a WordPress blog, so if you have an account with them, I’d love to see you follow me there. The Facebook pages for Thus, History! and IctusOculi will also stay up, so if you’d like to follow my activities on those sites on Facebook, follow those pages instead.
I’ll be deactivating my Facebook account. For now. Maybe there will be a time when I feel differently again, and I’ll be back. For now I just need to take a breath. I’ll leave the account up for a bit so you have a chance to see this.
I’m not slipping out quietly, and I’m not making a huge scene. I’ll just be somewhere else for a while or a year or a life. We’ll see.
Lutz Becker and Gunnar Sohn of #KönigVonDeutschland, a podcast about the many sides of, and issues related to, the concept of utopia, recently spoke with me. The episode (in German) is available on iTunes and SoundCloud.