Swans Commentary » swans.com

Disposing Of The Digital Humanities

by Harvey E. Whitney, Jr.
(Swans - January 28, 2013)   In this austere age of withering budget cuts to the human disciplines in the college curriculum, I've noticed new fields emerging in the humanities that seek to prop up the human disciplines the way prosthetic legs prop up an amputee: such fields can be subsumed under the title "digital humanities." This is a subfield of the human disciplines that seeks to analyze "phenomena" such as human coping in a "digital world" and how the digital realm plays a role in "constructing knowledge." Certain relativist scholars who have an active, ideological interest in attacking foundationalist narratives of knowledge have lauded the emergence of this and similar disciplines as a way of making the humanities relevant again; additionally, such studies are said to provide a new realm of phenomena that the humanities are uniquely fit to explore. (1) Whether these new fields are profitable for the practitioner, or really relevant, remains to be seen, as individuals who make a living writing code still do far better than those who write about how people engage with the tools (or fabricated phenomena) the coder produces.

I am somewhat concerned about certain assumptions of these new disciplines, not simply because of their unquestionably relativist strains, but because they declare the human relation to the digital a viable academic subject. There is an obvious redundancy here, as coders, software engineers, and software testers already exist to study such relationships: this is perhaps the reason why new versions of software emerge on a continuous basis. When building software or writing code, these individuals must take the end user's experience into account and use that information to revise their work to suit the user's interests. To take a rather plain example, it is no accident that more recent Web browsers have been built to drop support for distracting markup such as the <marquee> or <blink> tags: people who use the Web to read text are often distracted by moving text within a sentence or paragraph. On the flip side, current browsers in our Web 2.0 era more readily support Flash files because Web users have come to expect more interactive video functionality on Web pages: Web 1.0 just was not up to the task. Of course, the aesthetics of Web browsing were not lost on the business world, which saw another avenue for reaching Web users with animated ads to pimp its wares. Web 1.0, so the story goes, was for the more cerebral, literate, reflective type who would take time to read lengthy prose. In our Web 2.0 environment, that instinct has largely been extinguished: the tyranny of the image has won the day. More on this observation in a moment.
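For readers who never met these tags, a minimal fragment (purely illustrative, not drawn from any actual page) shows the sort of markup browsers have been moving away from:

    <!-- Two presentational tags from the Web 1.0 era. -->
    <!-- Most current browsers ignore <blink> and render its text as static. -->
    <p>Sale ends <blink>today</blink>!</p>
    <!-- <marquee> is nonstandard and deprecated, though many browsers still scroll it. -->
    <marquee>Welcome to my home page!</marquee>

Where designers still want such effects, they now tend to produce them with CSS animations rather than with dedicated tags, leaving the reader the option of turning them off.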

We can perhaps thank the Steve Jobses of the world for helping to make the Web browsing experience more image-friendly, for the image seems to be more readily understood than written language. This much is certain and is largely due to our evolutionary experience: we learned imagistic or gestural meanings before learning the characters or alphabets of a written language to make meaning explicit.

Now why would human interactions with digital technology tempt the humanities scholar to think that he or she could do a better job than the coder or software engineer in studying human experience? The technician seems to have a better grasp not only of the power and limitations of code or software but also of the practical aims of the user (i.e., what the user wants to accomplish).

The humanities scholar who dabbles in digital studies will often invoke abstractions such as "digital world" to talk about the relationships between user and software, or between users whose communication with one another is mediated by software. But we have to be careful about characterizing relationships as things or as "real." As a pragmatist, I generally don't think that full-blown arguments over what constitutes the real and the unreal are meaningful to begin with, because ultimately such arguments rest upon which set of metaphysical criteria we prefer over others and which attributes of the phenomena fit or don't fit those criteria. The metaphysician, whether humanities scholar or digital-culture generalist, is skilled at contriving tidy categorical boxes into which things neatly fit or don't fit, which might be good for explanation; but experience does not always have such a binary, dualistic character. While we can perform certain tasks in this so-called digital world, such as doing our banking or wishing a friend a happy birthday, there are certain limitations. We can't, for example, go fly fishing in the digital world or send astronauts from the digital world into outer space. Cyberspace has no spatial attributes and therefore cannot be occupied by physical objects; there, all is surface and no depth. We can't and don't exist there.

Additionally, the proponents of the field of digital humanities have latched on to, as an object of study, a prominent feature of Web 2.0: social media. Unlike Web 1.0, Web 2.0 allows people to interact almost instantaneously. This, of course, fits in with criticisms of Web 1.0 as a relatively static medium in which exchanging information took more time. The emergence of social media is significant, according to proponents of the digital humanities, because it not only allows human interaction in real time but can almost immediately shape human behavior, often with moral consequences. We can thus grant, for example, the existence of new morally repugnant phenomena, such as cyberbullying or online interactions between pedophiles and children, which actually do have "real world" consequences. Social media, therefore, say the proponents of the digital humanities, provide not only a new horizon in which to examine values and ethics but also a means of stamping out evil, which always threatens to unravel the fabric of the social order.

I am not sure that we need to consult scholars of the human disciplines about the moral correctness or incorrectness of online interactions that can result in real-world anti-social behavior: the human disciplines do not seem to have, whether historically or exclusively, any special insight for determining the moral value of an act. Many of the great scholars of the human disciplines, such as Friedrich Nietzsche or Immanuel Kant, actively rejected the idea of objective moral values (the former), justified social segregation based upon race (the latter), or justified misogyny (both); and while it may be said that such figures were products of less socially progressive times, why should we think that contemporary academics in the humanities express more morally grounded attitudes? Society already has a functioning gauge for determining what is morally good or repugnant, and when there is a lack of consensus (for example, on the issue of abortion), society can deliberate and further refine its ability to build consensus. This mechanism is far from perfect, but morality is not always clear cut, especially when we recognize the force that emotion plays in shaping moral opinion.

I purposely brought up the distinction between Web 1.0 and 2.0 because it is a very interesting dichotomy, although I want to be careful about indulging some of the same metaphysical preferences that I discouraged above. While Web 2.0 supports a multitude of interactive image and video formats that Web 1.0 did not, I would like to discourage the notion that Web 1.0 hosted more reflective or more deliberative interactions than Web 2.0 does. Those who promote this distinction will often point to the fact that within social media, conversations between individuals are much shorter; that Web 2.0 has a far more extensive iconography for emotive expression (i.e., more emoticons); and that popular Web 2.0 platforms, such as Twitter or Facebook, tend to discourage drawn-out, deliberative, analytical, or even poetic expression. There is, for example, a character or word limit when posting a tweet or Facebook status update: something that would have been unheard of in the more static, more reflective days of Web 1.0.

This is a somewhat erroneous dichotomy that can simply be countered with empirical example. Reflectiveness, or the lack of it, is determined by the company you keep. Internet conversations I have with old graduate school colleagues and former faculty mentors tend to be more reflective than conversations I might have with an old party buddy or a relative who never went to college. So we cannot generalize about the overall social character of Web 1.0 or 2.0, because we are merely describing a subset of relationships (i.e., our own) as opposed to the sum total of relationships online: a total that is indefinite. Therein lies the fundamental flaw of the digital humanities and of academic Internet evangelists who view digital relationships as a new set of "phenomena" worthy of systematic study: we can speak of the character of online interaction only by examining a subset of those interactions, a familiar flaw that we recognize in every poll or survey looking to describe a population from a subset of that population.

 

· · · · · ·

 

Legalese

Feel free to insert a link to this work on your Web site or to disseminate its URL on your favorite lists, quoting the first paragraph or providing a summary. However, DO NOT steal, scavenge, or repost this work on the Web or any electronic media. Inlining, mirroring, and framing are expressly prohibited. Pulp re-publishing is welcome -- please contact the publisher. This material is copyrighted, © Harvey E. Whitney, Jr. 2013. All rights reserved.

 


 

About the Author

Harvey E. Whitney, Jr. is a doctoral candidate in history at Florida State University and teaches medieval and modern global history at Howard Community College in Maryland. To learn more, please visit his Web site at http://hewhitney.com/.

 

· · · · · ·

 

Notes

1.  Stanley Fish, a veteran scholar of the 1980s academic culture and literary canon wars, has lately been waving the banner of the digital humanities in The New York Times. There is a new academic journal, Digital Humanities Quarterly, that has its sights on establishing the digital humanities as a permanent field within academia. Like weeds, such fields are slowly but surely sprouting in the English departments of a number of universities, such as Marylhurst University, Indiana University of Pennsylvania, City University of New York, Pennsylvania State University, and countless others. A good primer on the debates within the digital humanities can be found at http://www.the-tls.co.uk/tls/public/article1099163.ece. Issues such as the priority and viability of scholarly articles published online rather than in an established, physical academic journal, as well as the democratizing nature of chat boards and social media, are other subjects that have attracted scholarly interest in the field.

 

· · · · · ·

 


 

 

Swans -- ISSN: 1554-4915
URL for this work: http://www.swans.com/library/art19/hewhit24.html
Published January 28, 2013


