Time to walk the walk. My first poster, illustrating the use of Mediathread in my William Blake seminar at Columbia, will be on display at CTL’s Celebration of Teaching and Learning Symposium on Friday – and I’ll be nearby, inviting you to take a look. “The eye sees more than the heart knows,” after all.
As a preview, here’s a picture of the poster (which links to a PDF version that you can download and actually read) – and, below that, a short description of it.
Far too often, students studying the illuminated books produced by William Blake do so in a fractured landscape. Blake’s text is usually lifted out of its original context and standardized into typography, and this abstracted text is usually the primary focus of analysis in an English class. While students might refer to images of Blake’s original plates to retrieve a sense of the complicating interplay of image and text in his work, it is difficult for them to analyze such interplay with the specificity they bring to the close reading of text, and even more difficult to store and cite such source material in analytic frameworks.
This poster illustrates the use of Mediathread to help close this gap, allowing students to iteratively practice analysis of the interplay of text and illustration in Blake’s illuminated books. Such analysis, entailing close reading of Blake’s plates online at the Digital Blake Archive (an early and important digital humanities initiative at the University of Virginia), helped students develop a capacity to evaluate Blake’s deliberate complicating of his poetry through illustration. It also allowed them to recognize and interrogate his resistance to mechanized print production. Analyzing digitized versions of the plates, paradoxically, gave my students a much better handle on the details and implications of his deliberately archaic production process.
Frank loved to talk. It was typical of him to dive into his library mid-sentence, flip through annotations, and recklessly — convincingly — raise the stakes. Was this about a website? You were actually discussing constructs of mapping, scientific rhetoric, the terms of survival. Had a new idea? Well here was a Virgilian prolepsis by heart, Latin spot-translated. He would guide your gaze to that book, say, right up over there, weave into discourse a new reference and half-assume you knew it. Often this was flattering, implicating. Any number of academics, administrators, funders, graduate students would come stumbling out of Frank’s office with a larger idea of themselves — and it was time for the next meeting.
He excelled at sizing up people’s motivations, and knew just how to charm. A perceived lack of character or initiative, though, left Frank restless, unable to forgo a swipe at surface constructs. He liked to push on any such surface, ironize with dabs of social theory or antiquity or university politics or geopolitical injustice. Often he somehow bundled these together as chords in some sort of showoff jazz. If, as occasionally happened, there could be no response, he’d pivot back to the pedestrian with a wave of his hand: “Anyway.”
It was intrinsic to his executive directorship (a title he snapped down like a trump card from time to time) that he could corral staff over and over again to talk strategy — to hazard reinventions of education, research, medicine, philanthropy, libraries with whatever insight youth, a new degree, technical ability, or sheer instinct might bring to the table. At these meetings he often urged others to take the initiative. He was a champion of agency, but found over and over that nobody would pick up the phone, connect the dots, work overtime, close the deal like he would.
I was with Frank for five years. During those years my desk was outside his office, in a tractor-beam zone from which several others, over the years, had fled to find peace. Because if there wasn’t a meeting, or even if there was and Frank wanted to deploy you for some reason, that office door was likely to wrench open:
Oh hey Mark, you got five minutes? You got 30 seconds? Could you come in here just a sec and take a look at this? Mark, two minutes? Mark, I want you to meet… Come in here a second, Mark, I’ve been describing your… Mark, are you busy? I’m trying to show this and I can’t get it to… Oh Mark, there you are, I was telling X that… X, I want you to meet a real humanist… X, I want you to meet a real librarian… Oh by the way Mark, I was just showing X your project and I think you should… Hey Marco what was the name of… did we ever hear back from… are you going over to… did you give X a yell yet … when will you see… oh X, before you go I need to present to you… I want you to meet… don’t get up, don’t get up… Oh hey Mark you’re still here, is everyone else gone? come in here, I want to show you … Mark, you’re a literature guy, have you seen… Oh hey Mark, you’ll enjoy this… Hey, Mark, before you go can you pound on my door I forgot my goddamn keys again… I’m going over to see X talk, are you free?… I’m heading out, you need a ride home? Hold on, Christ … I have to take this call… just a sec…be right there…
The zone was no place to enforce or even defend your own time and rhythms. Yet I learned to trust it. The zone led into teaching appointments, funded initiatives, trips across the world — good work in a new field. Frank asked for loyalty, offered protection, looked out for betrayal, practiced the improbable patience of the chronically impatient. He had a knack of looking at you intently while flinging out an idea or reference, fixing you with deep eyes. His paradoxical energy, his love of complexities, were all right there in real time, right with you: as he talked he was taking your measure, empathizing with you, imbricating you, flattering you, challenging you, and maybe watching if you’d let him get away with it all. In this way he was not exactly paternal, not exactly the godfather.

He loved and channeled insurrection. “I’m just a frightened clerk of the empire,” he’d say, confronting bureaucratic restriction or hedge. Rules, limits, procedures were for those actually frightened. From his scholarly cocoon he pushed at the university, losing battles but winning or outlasting wars. He cared enough to risk, he had grown his organization through the booms and busts, he had persevered. He had a personal network strung across a career through several businesses and educational institutions that had cycled repeatedly through Columbia. Resources marshaled, favors traded, opportunities opened in the academy by — as he sometimes put it — the people who call New York City a town. Another of his favorite words: “orthogonal.”
In the world of educational technology, he was by turns a champion and the bull in the china shop. He would rail against “tool myopia”; his own gadgets would get dropped, lost, destroyed; his interest in, say, simulation environments or analysis platforms or videotaped activities could drift to the limits of such representation; staff presentations of projects might draw fire from him for days. Frank got things done through the continually applied force of his personality, intellect, and a well-honed sense of opportunity — not by punching a clock or following a protocol or working through code or performing any kind of archiving, updating, quality control; he knew people did all this and could nod to that, but these weren’t his mindsets. Instead he insisted his staff meet him on the ground of higher purposes, social transformations. If someone would not do this — ultimately a matter of spiritual, not intellectual, temperament — his attention moved on.
But he would be back. There was something relentless about the man, including his faith in students, and to work with him was to be his student. You both knew you were in this human situation together, the scene of instruction in a high stakes and ever-doomed world, day after day. It was always about teaching. He always had one, often two classes he was actually teaching on the side, or summer reading groups he’d form with staff, and in these he’d conduct a slow march through thick theory, savoring complexity. There were days when he’d notice me carting around Sophocles, say, en route to my own class and say: “you know, you and I, we should really team-teach the Core sometime.”
When I finally moved out of the zone into my own office, Frank was my first visitor. By then it wasn’t at all easy for him to move around; he was driving down from Riverdale to campus less and less, between periods in the hospital. He embraced my new opportunity and, sitting in front of my books for once, made it ours. He’d been working out a long term vision — out came a new chart. We would continue this conversation, up in Riverdale or on the phone from Beth Israel, no matter the medicine slurring his speech, until he could hold out no longer and was gone.
This year’s NMC was considerably more interesting, not least for a fundamental tension running through the conference — and extending, I’d argue, into the dual track courseware-for-the-masses ventures of our host, MIT. Does the university release modular, somewhat decontextualized elements of its learning programs onto the open web — or does it work up a comprehensive platform and run full courses online? OpenCourseWare or edX?
At issue at NMC, throughout, was the need to spur actual learning. We begin in childhood curious and playful, it was posited repeatedly (nobody used the phrase “trailing clouds of glory,” but it was in the air) — we begin piecing together the world in gladness, effortlessly making discovery after discovery, until the crush of the educational system smothers that organic desire to learn, wholly, implacably. Numbed by standards-based tests and rote knowledge-dumps, marooned in problem sets and lectures and test routines and settings that were stale a hundred years ago, drowning in debt and cut off from employment, the hungry sheep look up and are not fed. Cue technology.
Despite ample experience to the contrary (and, to be fair, not a little experience pointing the other way too), we keep hoping that machines will make the acquisition of information easier, more effective, better. Surely if only we could tailor knowledge into the channels of computational processing, or teach machines the quirks of our natural language, or train them better on our associative behavior, or leverage them, at least, to expose what we don’t know, we would stake out some real progress.
And yet, whenever I’m in a dystopic mood, our increasing reliance on machines to “personalize” data seems profoundly isolating — cutting us off from the complexity of unchanneled (or paranodal) experience. Pursuing networks, efficiencies, we eliminate the unquantifiable — the uncanny, the unexpected, the awesome, the murky drives that shape us so powerfully. The best stories I know about knowledge are incredibly messy, beholden to motivations and coincidences unanticipatable by artificial intelligence. See Oedipus — see everything (character, fate, chance) that is necessary for him to actually see.
The press is all over the online education story. It’s official: Harvard and MIT are hitching up to birth yet another MOOC venture — massively open online courses, such an ugly acronym — to the tune of $60 million. edX thus joins Coursera, Udacity, and whatever next venture blooms by the time you read this in rolling out prestigious curricula to the masses. It will be led by the director of MIT’s Computer Science and Artificial Intelligence lab (just as Udacity was co-founded by an AI innovator, the inventor of Google’s self-driving car and Google Glass).
These stories in the press rarely pursue implications of AI-driven MOOCs. Among their other aspirations, MOOCs promise to be giant learning data farms, ones that loop information back into customization and management of the learning experience. The emphasis is on “massive”: wash a giant population through your system and it refines itself. Ithaka’s recently published Barriers to Adoption of Online Learning Systems in U.S. Higher Education makes the point:
By gathering data on how thousands of students progress through a common body of material, these systems should be able to help future curriculum planners optimize the sequence and design of courses and modules.
As crowds course through your system, their activity is harnessed and fed back, making the system smarter. Just as the key to making IBM’s Watson a Jeopardy champ was supplementing its giant database with just-in-time registers of human decisions, MOOCs promise to be more than a giant pile of course material: they will monitor quizzes, ratings, and discussions; they will improve themselves.
The impulse of MOOCs is no doubt altruistic: opening up the ivory tower, distributing the best teaching to the broadest possible public. But we may be many, many more failures away from understanding how that might work in practice: how the artifacts of an in-person educational institution, produced by scholars and administrators dancing to local imperatives and cut off from the full landscape of global access, translate meaningfully at scale.
Is there not, though, something poignant if not perverse about examples like this?
Online courses with thousands of students give researchers the ability to monitor students’ progress, they said, identifying what they click on and where they have trouble. Already, a researcher from the Harvard Graduate School of Education, using the M.I.T. Circuits course, found that students overwhelmingly preferred to read the handwritten notes of Professor Agarwal rather than the same notes presented on PowerPoint.
When the topic of study is mechanistic, or at least rational enough so that there are correct and incorrect choices that can be monitored and corrected by bots, these projects seem, well, natural. But when they stray into humanities…
The edX project will include not only engineering courses, in which computer grading is relatively simple, but also humanities courses, in which essays might be graded through crowd-sourcing, or assessed with natural-language software.
Here we go. The day a computer starts grading Janie’s analysis of The Great Gatsby, say, seems very close, and very dreary. Ten points for mentioning the green light in the first five sentences; escalating points for increasing proximity of “America*” and “disillusion*”; passive construction to be scoured out and flagged. Will it matter what she thinks when she’s just talking to the machine? And yes there are a thousand clever Facebookish ways to match up users of the system, a myriad of recommendation engines, any number of ways to set up pristine (because secure and erasable) interactive spaces.
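To make the dreariness concrete, here is a minimal sketch of what such a rubric-bot might look like. Everything in it is invented for illustration: the `grade_gatsby_essay` name, the point values, and the crude pattern-matching are a parody of mechanical grading, not drawn from any real grading system.

```python
import re

def grade_gatsby_essay(text: str) -> int:
    """A deliberately crude keyword-and-proximity 'grader' in the
    spirit of the rubric above. Hypothetical, for illustration only."""
    score = 0
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())

    # Ten points for mentioning the green light in the first five sentences.
    if any("green light" in s.lower() for s in sentences[:5]):
        score += 10

    # Escalating points the closer "america*" sits to "disillusion*".
    words = re.findall(r"[a-z']+", text.lower())
    a_idx = [i for i, w in enumerate(words) if w.startswith("america")]
    d_idx = [i for i, w in enumerate(words) if w.startswith("disillusion")]
    if a_idx and d_idx:
        gap = min(abs(a - d) for a in a_idx for d in d_idx)
        score += max(0, 10 - gap)  # closer pairing, higher score

    # Passive constructions scoured out and flagged: one point off each.
    passives = re.findall(r"\b(?:was|were|is|are|been)\s+\w+ed\b", text.lower())
    score -= len(passives)

    return score
```

Note what the sketch cannot do: it rewards proximity of tokens, not the thought that connects them, which is precisely the worry.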
After much fuss about the song’s title (vs. its name, vs. what it’s called, vs. what it is), the Knight settles into a song about an encounter between an “aged aged man” and a White Knight-like interlocutor who can hardly listen to him. (John Tenniel’s illustration drafts the Mad Hatter to play the aged aged man.)
Repeatedly the reflected Knight demands to know how the aged aged man lives. In response, the genial codger gamely describes various money-making schemes (capturing sleeping butterflies for mutton-pies, selling them to sailors; setting a mountain stream on fire to produce hair products; working haddocks’ eyes into waistcoat buttons; digging for buttered rolls; liming twigs for crabs; searching grassy knolls for cab wheels) — but the Knight continually zones out. That may be because the Knight is himself caught up in a series of equally absurd but less commercial schemes (hiding green whiskers with a large fan; getting fat on batter; warding off rust on the Menai bridge (me-and-I: a bridge to nowhere) by boiling it in wine).
I particularly like how disjunction and obsession jangle about each other in the White Knight’s song: how the narrating Knight, never listening, pursues one steadfast inquiry:
“Who are you, aged man?” I said,
“and how is it you live?”
And his answer trickled through my head
Like water through a sieve.
…having no reply to give
To what the old man said,
I cried, “Come, tell me how you live!”
And thumped him on the head.
…I shook him well from side to side,
Until his face was blue:
“Come, tell me how you live,” I cried,
“And what it is you do!”
Who are you is tantamount to “how do you live,” which in turn is welded to “what is it you do.” And though nothing the aged aged man does or says seems to get through to his interviewer, or vice versa, the Knight will never forget “that old man I used to know,”
Whose look was mild, whose speech was slow,
Whose hair was whiter than the snow,
Whose face was very like a crow,
With eyes, like cinders, all aglow,
Who seemed distracted with his woe,
Who rocked his body to and fro,
And muttered mumblingly and low,
As if his mouth were full of dough,
Who snorted like a buffalo—
That summer evening, long ago,
A-sitting on a gate.’
It’s an infectious sentiment: having made little progress in understanding or relating to the White Knight, Alice also tags her encounter with him and his song as unforgettable: “Of all the strange things that Alice saw in her journey Through The Looking-Glass, this was the one that she always remembered most clearly. Years afterwards she could bring the whole scene back again…”
The whole episode is drenched with anxiety, self-consciousness, and nostalgia: it’s a world in which solipsistic, distracted agents wish mightily to connect — over the question of what each other *does*. At the heart of this discourse is the notion of identity as defined by work, and the further suggestion that work cannot really be communicated, triggering a retreat to surfaces.
Perhaps you see where this is going. I’ve been thinking about the communicability of “work” lately, especially in the light of an interesting little meditation on the representability of a digital project posted last week by Craig Mod, an editor’s selection in Digital Humanities Now. Mod describes the challenge of representing everything it took to design Flipboard for the iPhone: “despite knowing we had been on a long journey, it didn’t feel like that journey was manifest anywhere.” The challenge to representing work, as he sees it, is to navigate from the “edgeless” realm of the digital into something that has tangible weight and shape. As Mod puts it, “There’s a feeling of thinness that I believe many of us grapple with working digitally.”
It’s an anxiety that anyone who works on digital projects can relate to — especially in an academic environment, where the value of this kind of work is uncertainly correlated, at best, to download or view or citation metrics. And the sentimental non-communication sketched out by Carroll is very much with us, I think. How much can we substantially know and talk about the true contours and efforts of each other’s digital projects? And given this fundamental challenge, how tempting is it to zone out, to think about our own private schemes of wine-boiling, even in the face of the most ardent demo of the next best thing?
We post code openly; we share project documentation; we create screencasts and sandboxes and guest access; we display and demo — it’s almost obsessive, if you think about it, all this effort to expose the work. And yet I share Mod’s craving for “a better understanding of what we’ve built and where we’ve been.” He resorts to a book.
Especially given the absence of this tangible shape and worth, a standard reaction to a digital project is to ask for proof of efficacy. It’s a way of asking: whatever it is that you’ve done here, can you prove to me its worth? All very understandable, but in learning and humanistic contexts this is often a showstopper, if we’re to be honest. Imagine responding to a critical monograph, or an authoritatively edited volume, in that way. A demand for evaluation can be a way of not listening, or of sidestepping the shape and scope of the actual project.
If two thousand lines of code could be proven to produce the same effect as two million lines, would there be no difference between the projects? We can track how many times a project is downloaded or mashed or tweeted, but what does this tell us? Is aggregation of assets or users a virtue unto itself? We slide into quantification and rather crude versions of assessment, never a comfortable place for the humanities.
An absolutely ineffectual project (along whatever lines you wish to measure by) may nonetheless be very worth understanding; it may exemplify institutional relationships and workplace methodologies and asset combinations that would be fully instructive to represent. This seems especially true during these awkward days, when the basic conditions and activities of scholarship are so rapidly molting in front of our eyes.
The (mis)characterization of work is bundle-jumbled, as Carroll of course saw, with reputation, identity, and security, especially now. So many conversations about “Digital Humanities” spin into speculation about the capacity of universities to cultivate the skills and recognize the achievements of scholarship in the digital arena. And behind all this, I think, is the fundamental problem of representing the dimensions of work underpinning the projects that are or should be undertaken. Who inspires them, how do they get designed and built, who works on them, what do they contain, who uses them, who hosts them, how long are they alive, and what do they spur?
Lev Manovich’s recent work on visualizing large media collections is a thoughtful reaction to the general difficulty of comprehending the contents of digitized projects. Imagine, though, if such visualization were pursued beyond the surface — imagine the challenge of representing what it took to bring together what you see even in conceptually simple (if charmingly hypnotic) projects such as the 5930 front pages of The Hawaiian Star or 4535 Time Magazine covers or the 100 hours of the video game Kingdom Hearts (hello Wonderland!).
As captured pages or games whip past us or splay out for us along axes, as our eyes scan across an entire corpus for patterns trends and influences, we may also just barely make out the backstage ghosts of publishers, distributors, vendors, librarians, technologists, students, postdocs, gamers, maybe even a scholar or two.
Though Manovich’s Software Studies Initiative does a nice job of exposing digital humanities projects, tools, and the goals of cultural analytics, what it actually takes to do and sustain this kind of work remains, well, off the screen. If cultural analytics wishes to “better represent the complexity, diversity, variability, and uniqueness of cultural processes and artifacts,” we still wait for a parallel aspiration to visualize such projects themselves, a meta-visualization that would get behind the glass and convey a better sense of how they live.
We might take a cue from software developers, who have long grappled with collaborative work protocols and representations. A visualization tool like Gource, for example, conveys the build-out of a project as branches blooming off of a source tree. To see this in action, see Schuyler Duveen’s visualizations of several projects built at CCNMTL. Remember, as you’re watching these videos, that you’re just tracking code — not design documentation, not conceptual revisions, not the content and interactive elements of these projects.
So even if we were to let go of the lofty ambition of showing the institutional and cultural extents of a digital project — everything and everyone it really takes to make it “work” — just capturing the full extent of design, development, and implementation turns out to be a boggling endeavor. At any rate it’s clear that time and space are both necessary components for rendering the complexity of the kind of work we’re doing, as well as some schematic for conveying the choreography of a great many interdependencies.
Maybe such representation — and the understanding and recognition that it would engender — will be bolstered through the rise of what Henry Jenkins and his students have termed “a higher transmedia criticism”. Once we figure out how to weave strands of coherency and causation across media types, we may have developed better muscles for conveying a fuller sense of the ecosphere of a digital project.
A basic point of academic maturity in the face of the digital onslaught, I think, is to recognize the deep infrastructure (or looking-glass world?) behind what seems “merely” virtual, an infrastructure that takes us on intricate paths between modalities, institutions, and technologies. It seems to me to be no accident that early characterizations of the digital in essays like Sven Birkerts’s The Gutenberg Elegies — with simplistic contrasts between print/digital like permanent/evanescent, deep/flat, sequence/signal — tended to fall away once scholars who actually worked on digital productions, such as Matt Kirschenbaum at MITH, pushed into fuller appreciations of machinery, interdependence, or “the forensic imagination,” to crib the title of Kirschenbaum’s 2008 book. Now that we’re not confusing (instant) access with (disposable) worth, what could a forensics of digital projects uncover?
Especially because I saw CHNM’s Tom Scheinfeldt talk this week about the values and tactics of THATCamp, I’m reminded that many posts on his blog have taken up the problem of Digital Humanities work. In 2008, for example, he was asking:
What happens to the increasing numbers of people employed inside university departments doing “work” not “scholarship?” In universities that have committed to digital humanities, shouldn’t the work of creating and maintaining digital collections, building software, experimenting with new user interface designs, mounting online exhibitions, providing digital resources for students and teachers, and managing the institutional teams upon which all digital humanities depend count for more than service does under traditional P&T rubrics? Personally I’m not willing to admit that this other kind of digital work is any less important for digital humanities than digital scholarship, which frankly would not be possible without it. All digital humanities is collaborative, and it’s not OK if the only people whose careers benefit from our collaborations are the “scholars” among us. We need the necessary “work” of digital humanities to count for those people whose jobs are to do it.
We can’t kid ourselves, though: this is swimming against a longstanding tide, and four years later, despite the DH hullabaloo, I’m not sure we’re anywhere closer to landing on firm ground. Though back in 2008 Scheinfeldt was heralding a “sunset for ideology, sunrise for methodology,” anyone devoted to a digital humanities project runs into a certain recognizable chill, if not a wall: a recurring dichotomy between actual philosophers and actual workers that Jacques Rancière, for one, traces through Marx all the way back to Plato.
And this, I think, is the sharp edge of Lewis Carroll’s non-encounter: a general shirking of the work of understanding actual work. The giddily imaginative protagonists in “A-Sitting on a Gate” (or whatever it’s called) would rather conceptualize like butterflies than engage in the mechanics, methodology, or production of each other’s schemes.
Carroll’s poem, by the way, was a Jon Stewart-worthy parody of good old Bill Wordsworth, particularly the *also* provocatively odd (but much less humorous) poem “Resolution and Independence”. In Wordsworth’s poem, the interviewer and interviewee are, respectively, a young bipolar version of the poet, and a severely aged man whom he discovers at work on the moor, gathering leeches:
His body was bent double, feet and head
Coming together in life’s pilgrimage;
As if some dire constraint of pain, or rage
Of sickness felt by him in times long past,
A more than human weight upon his frame had cast.
Anxious about the worth of a poetic career, our narrator approaches this imposing and ancient working man, so old he’s hardly human (“as a huge stone.. / on the bald top of an eminence…” “Like a sea-beast crawled forth”), positively aching for connection, for moral understanding, for the experience of bonding within the terms of life on this earth. And so the narrator strikes up conversation about work:
“What occupation do you there pursue?
This is a lonesome place for one like you.”
And just like Carroll’s Knight, despite a genial reply from the occupation-pursuer, Wordsworth’s poet just cannot listen. Just as the old man starts to describe “Employment hazardous and wearisome,” the poet zones out:
The old Man still stood talking by my side;
But now his voice to me was like a stream
Scarce heard; nor word from word could I divide;
And the whole body of the Man did seem
Like one whom I had met with in a dream…
Having dematerialized the leech-gatherer into a dream, it is not long before the poet is mulling over the same egocentric problems, the misery of poets like himself.
–Perplexed, and longing to be comforted,
My question eagerly did I renew,
“How is it that you live, and what is it you do?”
You’d think this would try anyone’s patience, but the leech-gatherer seems an indulgent sort: he smiles, repeats himself, and starts to describe the extent, methodology, and waning supply of his trade. But there’s no holding our poet back from diving right back into that crazy Wordsworthian mental space, in which the mind plays tricks on itself, self-consciously, with mirrors; no matter what the old man says and repeats, no matter what his actual activities or the conditions that drive them, our daydreaming narrator is stuck “In my mind’s eye…” flipping over “thought within myself.” He climbs out of his mind only when the leech-gatherer stops talking, just in time to milk the encounter for a quick moral:
I could have laughed myself to scorn to find
In that decrepit Man so firm a mind.
A firm mind? We’ll have to take the narrator’s word for it, since words from the old man himself are inexorably snuffed out; his auditor ends up content with the surface act of encountering.
The only way I can make sense of this poem is as provokingly insufficient: another example of man’s inhumanity to man, leavened with presumption and sentiment — yet one more instance of high-mindedly ignoring the conditions, demands, and contours of actual work.
So: yes, yes, and yes; chalk up this still interlude to blogger’s block, pithier observation-release venues, and — most of all — the day to day work of moving the ball in the suddenly crowded game of digital humanities.
Time was, children, that an MLA panel called “Why I (Do Not) Use Digital Resources” attracted a thin crowd indeed, just a few enthusiasts, cranks, and outliers — maybe a handful of more established academics with furrowed brows worried that they might have to worry about this stuff someday (librarians already knew they would). That time has passed, and all its aching joys are now no more, and all its dizzy raptures — the mindset of late 2003 is almost beyond recall.
Where is the world of _eight_ years past? _’T was there_–
I look for it–‘t is gone, a globe of glass!
Cracked, shivered, vanished, scarcely gazed on, ere
A silent change dissolves the glittering mass.
That’s from Byron’s Don Juan, by the way: change, ’twas ever thus. You can read it in context here — if you do, beware annoying autoplay pop-ups, and take a moment to consider that presentations of canonical pre-copyright texts have not really changed these past eight years.
Anyway, I now hear graduate students invoke “digital humanities” constantly, insistently, desperately: finally, a future — finally, room for change. And though a year of not-blogging is tantamount to retirement in these fleeting days (when “change grows too changeable” — guess who), I’ve been manning the digital trenches and owe you a report.
For now, here’s a summary of my most consuming project, a multimedia analysis tool christened Mediathread some two years ago.
When it comes to shaking up learning and scholarship, a tool like Mediathread seems as promising and disruptive as, well, wikis did back at the dawn of antiquity, sometime back in Bush II’s first administration. But eight years hence…
_“carpe diem,”_ Juan, _“carpe, carpe!”_
To-morrow sees another race as gay
And transient, and devoured by the same harpy.
For some time RSA has been creating animations overlaying edited versions of taped lectures by the likes of Slavoj Zizek, David Harvey, Jeremy Rifkin, and Barbara Ehrenreich. It’s a clever way to disseminate ideas — the animations act as a lively accompaniment with their own gentle little dramas. Have a look, for example, at this treatment of Ken Robinson discussing changing educational paradigms:
Right? One can’t help but think that all those poor unengaged students could be roused out of their medicated torpor if only ideas were always so animated. It seems fitting that an RSA lecture would pay particular attention to the plight of children caught up in industrial death-in-life. After all, this is the same Society that solicited inventors in 1797 to come up with ways of sweeping chimneys that did not depend on little children. And that gives us more than enough occasion to look at a contemporaneous multimedia attempt to convey the plight of blighted children:
It’s quite easy now to push back on such millennial hyperventilating from a number of perspectives. Digital multitasking is distracting and dangerous; scanning, sampling, and mashing are destroying deep thought; the internet presents to children any number of emotional and physical risks. From my own perch in libraryworld, I’ve long been skeptical of concepts like “Net Generation Students,” which can lead to embarrassing institutional lunges into quickly expiring playpens, even as I applaud many of the service advances that get marshaled under such banners.
The most typical marketing is “revolutionary” — it were ever thus. Meanwhile the hungry sheep stay hungry. But now that we’re all sober and nostalgic for the good old virtues — close analysis, deep thought, transcendent expression — now that we’re virtuously skeptical about the effects of technology on real learning — I feel like pushing the other way a bit. I would never want to end up in a corner where intellectual worth was measured by detachment from the stunning shifts in communication of our day. That’s too often a stale corner, full of Casaubons, where ignorance or even fear is sanctified.
Hence, a couple of completely anecdotal observations, à la Prensky, though I’ll lay off the italics.
Even at this late date, some students wash into my classroom with a timorous attitude towards “computers.” Whether or not this is an affectation, a discourse of detachment from technology persists with some amount of vigor, even (or especially?) among “digital natives” at highly selective colleges. And yet the student so loath to do something new with computers in a course setting is tricked out — you can count on it — with a phone of some degree of smartness, an overactive Facebook account, a laptop, a digital music delivery system, and a cherished, variously organized, and promiscuously shared media library juggled between devices.
So perhaps we should set aside the easy binaries — digital native, displaced digital immigrant — and focus more on local competencies (whoops! italics!). The challenge, often, is to apply facility within one kind of digital environment to another — to bring what’s lively and engaging about community discourse in Facebook, say, into a new and different application, as defined by an instructor. Faced with a course blog (say), students are rarely starting from scratch, just as they’re rarely truly innovative users of the environment right out of the gate. They’re somewhere in the middle: endowed with some skills from their ‘other’ life, a life that can seem at once more playful and more serious than what’s going on in the classroom — skills that may or may not pertain to the effort at hand. We can’t assume that this pertinence will be discerned and exercised.
The question of local technical competence and portability thereof is a version of the larger question hovering over the classroom: what is the relationship of what’s learned here to the outside, impervious world? How can we know that classroom skills will really apply out in the field?
The good news for educators, I think, is that “digital natives” come into the room used to figuring out local rules and expectations: ready to be guided in that way. They’ve figured out how to get through so many various environments, and through a certain plasticity and perhaps even detachment (the world is full of strange games) they’ve succeeded. If playing to the “twitch speed” of this generation (a particularly unfortunate Prenskyism) leads education into the shallows, we might better address the adaptability necessarily cultivated by anyone who wants to think with or write to others today.
If “sustainability” is a touchstone du jour, the emphasis of any number of academic courses and programs, my quick claim, backed up by no data whatsoever, is that “adaptability” will be much more important to “digital natives.” When it comes to communication technology hurtling towards who knows where, no skill set is sustainable below a level of purely abstract values — and the effective persistence of such values (critical thought, intellectual honesty) pretty much depends on transference of skills between worlds. “One dead, / The other powerless to be born,” a burnt-out “digital immigrant” might say of these worlds. “O children, what do ye reply?”
Aided Eye, anyone? Small shivers of horror and wonder ran down my spine when reading today about adapted eye-tracking technology, described as “a sixth sense” by researchers presenting the proof of concept in the French Alps. According to this Discovery article, tech wizards have been concocting intimate feedback loops between GPS, customized databases, and biofeedback for a steady stream of just-in-time information.
Here’s the scenario: you’re walking down the street and looking at a location, wondering what’s there. You blink a set amount of times to get information. Trackers reading your eye’s positioning connect to GPS and a database, and a pulse of information comes streaming onto your phone — no, it wants to be closer — it comes into your ear through text-to-speech conversion.
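As a thought experiment only — nothing here reflects the researchers’ actual system — the loop might be sketched like this: eye tracker counts blinks, a GPS fix locates the gaze, a database answers, and text-to-speech whispers the result. Every function, name, and threshold below is invented for illustration.

```python
# Speculative sketch of the blink-triggered lookup loop. All stubs.

BLINK_TRIGGER = 3  # blinks needed to fire a query (an assumed convention)

def gaze_fix():
    """Stub eye-tracker + GPS: coordinates of the spot being looked at."""
    return (45.9237, 6.8694)  # somewhere in the French Alps

def lookup(lat, lon):
    """Stub place database keyed by coordinates."""
    places = {(45.9237, 6.8694): "Chamonix: alpine town below Mont Blanc."}
    return places.get((lat, lon), "No information for this location.")

def speak(text):
    """Stub text-to-speech delivery into the wearer's ear."""
    print(f"[earpiece] {text}")

def on_blinks(count):
    """Fire a just-in-time lookup once the blink threshold is reached."""
    if count < BLINK_TRIGGER:
        return None
    answer = lookup(*gaze_fix())
    speak(answer)
    return answer
```

The interesting design question is all in that threshold: how many blinks distinguish a query from mere blinking?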
And that’s the simple scenario. Another ‘proof of concept’: memory assistance! One of the worst questions in the world — and one of the most universal — is that on-the-spot inquiry, “Have you two met?” Once upon a time, a response of frantic blinking was mere anxiety, as the target of such inquiry calibrated an answer. But in the sunshaded future, that rapid blinking will be an Aided Eye wearer’s infrared sensor-delivered request for data from whatever Facebook’s molted into, from a “lifelog” that will recall the face and tell you what you need to carry on the conversation. (encountered 4.28.2020 23:09:03. likes sunsets and walks on the beach. trust level 7.)
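The “lifelog” imagined above is, at bottom, a lookup table: a face identifier keys a record of past encounters, returned as the whispered briefing. A hypothetical sketch, with every name and the record format invented for illustration:

```python
# Hypothetical "lifelog" record store; all names and fields are invented.
from dataclasses import dataclass

@dataclass
class Encounter:
    name: str
    last_seen: str   # timestamp of the previous meeting
    notes: str       # e.g. "likes sunsets and walks on the beach"
    trust_level: int

LIFELOG = {
    "face:4f2a": Encounter("Ada", "4.28.2020 23:09:03",
                           "likes sunsets and walks on the beach", 7),
}

def briefing(face_id):
    """Answer the blinked query: who is this, and what should I know?"""
    record = LIFELOG.get(face_id)
    if record is None:
        return "No match. You two may actually be strangers."
    return (f"{record.name}. encountered {record.last_seen}. "
            f"{record.notes}. trust level {record.trust_level}.")
```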
Word from the Alps is that such a system can be mounted onto glasses, though technicians are struggling with how to deliver data feedback. “A tiny screen embedded inside the glasses or an audio system are both options.”
So yes, the parties we’ll go to in our glinting iModos — reading other sunglassed faces with our right eye, reading data streaming back with our left. Making clever and timely observations about objects in the room, best database wins. Winking direct messages over to someone who may be smiling, or may be triggering a private replay of an archived video.
We’ll all wear our sunglasses at night — and in fact while we sleep — because you never know when you might wake up in the middle of the night and need to know something.
Data never wants you to be in the dark.