Category Archives: Reading Blog

Week Thirteen Reading Blog: Are Digital Natives Just Immigrants in Disguise?

Danah Boyd’s skepticism about — and deconstruction of — the terms “digital native” and “digital immigrant” have done much to strip away the false veneer attending those particularly loaded phrases. I’ve always been skeptical of these characterizations, especially after witnessing so many members of my own generation engage in what one might call “digital innovation.” Many of the most useful digital tools and software we enjoy today have come from people like Steve Jobs, Bill Gates, Roy Rosenzweig, and others who grappled in adulthood with the promises that technological advances offered society and the humanities at large.

For me, the distinction between “native” and “immigrant” is not simply generational but instead rests on adeptness of use. Younger people are more patient, more adept users of new media and digital technology, while those of us born in the 1950s, 1960s, and early 1970s are less likely to embrace them. Only when the world at large “tests” a new technology and deems it appropriate (and often essential) for everyday use do we “old fogies” embrace those new digital tools. Granted, younger people born in the 1980s and 1990s are not just users; they are also producing the innovative, graphically stunning video games that have flooded the market. But these younger generations don’t quite seem to “own” the market as fully as I had previously imagined; skill with a computer does not equal wisdom with a computer. We label these younger folks “digital natives,” but they certainly don’t have all the answers. I’ve always considered my son (born in 1985) to be one of those stereotypical “digital natives.” Thus, I routinely deferred to his digital skill in navigating the Internet responsibly while I faithfully fulfilled the role of “digital bumbler.”

But my hands-off attitude changed over time when I began to oversee and question how he was using the Internet and digital tools. I quickly learned that he had simply jumped on innovations like Facebook and Twitter (and some very violent online games) without questioning what they might or might not do to his online reputation — or to his psyche! His judgment in these cases was not always sound. My initial reluctance to supervise his use of these online tools stemmed from my deference to him as a “digital native.” In other words, I told myself that “he knows better than I do the dangers inherent in these online digital tools.” Not so. He got himself into a pickle on more than one occasion, and I had to help pull him out. He was an excellent “user” of digital tools, but he used them reflexively and without much forethought.

Thus, Danah Boyd is on target when she claims that the rhetoric attending these terms is not just inaccurate, it’s dangerous. The “networked world” she mentions is fraught with politicized language and trapdoors that can snare the unsuspecting user — “native” and “immigrant” alike (Boyd, 197). A mere inclination to use a digital tool or platform, which is how I have come to view “digital natives,” does not automatically make one a discriminating user. Thus, older generations of less adept but interested users (like me) still have a role in guiding the generations growing up with this technology to use it properly and to recognize its potential pitfalls.

We must also keep in mind that many members of the younger generations are turned off by technology due to the numerous pitfalls they have experienced or that await them, such as the posting of unflattering images, online bullying, false information, and the like. Perhaps these online dangers explain why Allison C. Marsh’s museum studies students (supposedly “digital natives”) demonstrated “little interest in the digital world” (Marsh, 279). Her students’ inability to use a simple program like Omeka to build an online exhibit that flowed logically proved equally disturbing to her. Granted, innovative experiments such as T. Mills Kelly’s “historical hoax” class can help draw reluctant students from the “digital native” generations into a more discerning and responsible use of the Web and other digital tools. However, such an approach is a two-edged sword, primarily because the thrill attending such a hoaxing exercise can produce students who later become “serial Web hoaxers.” In other words, it could create a whole new category of user — the “digital monster.”

What I find truly puzzling, though, is that none of the professors I have had at GMU has employed any digital innovation whatsoever in the classroom. Of my 10 courses in the Master’s program, only one demonstrated (and for one class session only) how to use the Web to find online primary sources. Specifically, we spent the class with a librarian explaining how we could find newspapers online using ProQuest — and that was it. I find this state of affairs remarkably ironic given Daniel J. Cohen’s admission that digital history is what prompted the Virginia State Council of Higher Education to approve GMU’s PhD in History program as the “PhD with a Difference.” The only class I have taken that involved anything digital is this one — HIST 696, Clio-Wired — and only as part of the PhD program. If GMU is the standard-bearer of digital history, why are the History Department’s faculty members not on board? In those 10 Master’s-level classes, the traditional model of sitting in a circle to discuss the “monograph of the week” dominated the teaching approach. The digital tools available today allow for excellent visualizations and immersive experiences. I would have loved to see some PowerPoint slides with images from the period, or even audio or video clips from documentaries, to supplement the in-class learning experience. These are cheap and easy tools to use. I run an academic institution for the Army; and, strapped as we are for cash thanks to a gridlocked Congress, we still employ a variety of digital and audio-visual tools to enhance learning, most of which cost nothing. If we are to undergo a true “digital turn” in academic history, then our history professors should set the example, especially in the very institution that prides itself on leading the digital charge.

Steve Rusiecki

Week Twelve Reading Blog: Should History Become a Virtual Commune?

As a historian who has already generated two books and a few articles for public consumption, I have some very stark opinions about when open access to someone’s intellectual property becomes a virtue or a liability. First, I am a firm believer in, and avid supporter of, the Budapest, Bethesda, and Berlin open-access initiatives described by Peter Suber. I support the Creative Commons initiative as well. However, I am also extremely sensitive to the need to protect a scholar’s (or, more specifically, a historian’s) intellectual property from untimely, unauthorized proliferation and, frankly, hijacking by others. Thus, the AHA’s embargoing guidelines for dissertations make good sense to me. Ultimately, some practical factors enter the equation very quickly when someone has “skin in the game” in the form of a published scholarly work, particularly when that work may represent a lifetime’s investment in travel, research, and intellectual creativity. Dan Cohen is correct when he states that historians, whose non-fiction works are most likely to encounter difficulties in terms of fair-use legalities, need to apply more effort and brain power to figuring out how to straddle the fence between open access and intellectual-property protection more effectively.

Cohen and Roy Rosenzweig, in their chapter in Digital History (2006) titled “Owning the Past,” are absolutely on target when they explain that existing “copyright law does nothing to protect ideas, only their formal and fixed expression” (Cohen and Rosenzweig, 200). Yet despite this statement, Cohen and Rosenzweig still advocate strongly for open access in the proliferation of “informal” and “unfixed” ideas, a position that I find troubling even though they also advocate for the Creative Commons licenses. Published works, as both scholars admit, clearly enjoy more legal protection and are less prone to intellectual hijacking because the knowledge they contain has already been formally presented to the reading public and to a host of peer reviewers. In other words, stealing ideas from a published work can result in successful legal action and strong policing by the public at large. For the most part, the reasonable-person standard applies: no one likes a thief or a copycat, and the public will normally “call out” such transgressors. Recent notable examples include Doris Kearns Goodwin and Stephen Ambrose, both of whom suffered professional embarrassment when caught red-handed quoting from published works without proper documentation. By contrast, unpublished ideas sent streaming through the Internet under the auspices of open peer review or through blogging are likely to invite intellectual “thieves” who prey on that type of unfettered, “un-policed” access. Granted, the Creative Commons licenses offer a potential remedy for any and all online ideas, but I am not yet sold on the viability of such licenses. Have these licenses ever stood up in a court of law?

The scholars most at risk are PhD candidates, who spend years digging for untouched primary sources in order to fill a scholarly gap that, in itself, may have been very difficult to identify. Thus, I fully support the AHA’s guidelines for allowing students to embargo their dissertations for up to six years. This option gives them a fighting chance to get their finalized work “out there” more formally and to enjoy firmer copyright protections. Once a book is published, the knowledge and the ideas are public. Even Judge Denny Chin’s ruling for Google underscores the importance of these protections. Chin recognized that Google’s open-access treatment of published works, works which already enjoyed copyright protection, provided “significant public benefits” for advancing “the progress of the arts and sciences,” all without “adversely impacting the rights of copyright holders” (Chin, Case 1:05-cv-08136-DC, 26). In short, Chin argued that these works were already protected and that their open-access proliferation on the Web was a benefit, not a liability, to the copyright holder. I agree fully. As the copyright holder of two books, and someone protected by these same laws, I have no trepidation about Google making my books part of its open-access project. I wrote the books to share the knowledge, not to hide it. But I would never allow any of my unpublished works to be disseminated in such a way, even with a Creative Commons license, because I know, and have experienced, others preying on the ideas of fellow scholars. Unpublished work invites rather than deters predators.

The other big problem with spreading ideas openly, without proprietary protections, is that some academic publishers operating on tenuous business models will be less inclined to publish ideas that have already made the rounds. Rebecca Anne Goetz has described how, in 2005, she was led to believe that blogging her ideas online would likely hurt her career prospects. More to the point, William Cronon’s statement that “several” editors from distinguished presses told him that sharing ideas online would affect publication decisions should send shivers down the spines of all PhD students. Maybe, as Adam Crymble suggests, the real problem is over-reliance on the book form itself. But until History Departments nationwide embark upon a revolution that redefines (or re-imagines) how historians can present their ideas publicly with clear protections while still meeting tenure-track requirements, the book is here to stay.

Overall, open access is a great thing. As Judge Chin stated in his opinion, it gives new life to old books and old ideas. But all ideas need some form of protection before they end up on the open circuit. Thus, the published form offers the most safety to a scholar, particularly because fair-use laws exist on such a sliding scale of interpretation and application. Copyright laws at least serve as a bulwark against full-scale rip-offs. I think all of us historians want our work out there, and we would be happy to see older scholarship engage more with newer scholarship. But none of this should happen until the scholar is ready to provide that knowledge to the world in a formal, polished manner. The digital, online history world is the new Wild West frontier of academia, and every scholar needs to be armed with a copyrighted “six-shooter” to avoid being exploited or to prevent a lifetime’s worth of work from becoming some lazy schmuck’s magnum opus.

Steve Rusiecki

Week Eleven Reading Blog: Does Advancing Digital Scholarship Mean that the Book has to Die?

The “book is dead.” This rather stunning statement by Tim Hitchcock in his Journal of Digital Humanities article titled “Academic History Writing and its Disconnects” did not sit well with me. As a lead-in to a host of online readings in support of born-digital scholarship and open peer review, Hitchcock’s assertion seemed oddly out of place and rankled me to no end. Granted, the “digital turn” is akin to an industrial-revolution-style sea change in how historians research, write, and present history, and we have an obligation to get on board — like it or not. However, the book, a roughly 1,200-year-old means of communicating causal and critical thinking in nearly all languages, still has relevance. Why does the advent of digital scholarship presume the death of a worldwide literary form that has proven effective for centuries and continues to do so in the present? Digital scholarship can and should supplement the book form, be it digital or in hard copy, in as many ways as possible without shutting down the most effective means in human history of communicating ideas and arguments. I have yet to visit a digital-history Web site that can advance effectively and concisely, using scattered hyperlinks, visualizations, and the like, an argument that demonstrates critical and causal thinking. William G. Thomas III mused openly about these issues, posing questions about, and recommending some solutions for, the challenges inherent in presenting a cogent argument in a born-digital online article. Thus, in my mind, rumors of the book’s death as a form of advancing ideas and arguments have been greatly exaggerated.

Many of the authors in this week’s readings advocate very strongly for open peer review and the academy’s recognition of digital history as a legitimate form of historical scholarship. These arguments resonate strongly with me, because so many of these new digital tools, tools that take most of us (especially me!) outside our comfort zones, can clearly do much to advance our knowledge of the past, particularly in the visual realm. The “social interaction” stemming from online peer review, as discussed by Alex Sayf Cummings and Jonathan Jarrett, is a great enhancement over the often closed-minded peer-review processes that scholars face today. Most importantly, these technological advances allow historians to reach a wider audience online, especially those scholars and general readers who feel that, by commenting on a draft digital article, they can help advance the argument and thus have some “skin in the game” — somewhat like the folks who contribute to Wikipedia, but in a more substantive way. The success enjoyed by Melissa Terras in using social media to publicize her journal articles is a case in point. The spike in downloads after she blogged about her articles was amazing and a testimony to her ingenuity.

Open peer review as a working concept has a great deal of merit. Even blogging and its attendant informality can do more to advance an article’s argument than the closed-off, formalized peer reviews that most journal articles employ as scholarly filters today. I have never been a fan of anonymous peer reviews, mainly because they allow nameless reviewers to take cheap shots or to grandstand their own theories at the expense of another person’s scholarship. I have published two books with peer-reviewed academic presses and one journal article, and I found the peer-review processes in both cases to be useful in some instances and just plain aggravating in others. The most useful feedback emphasized tangible ways to clarify an argument or a certain point in the body of the work. The least useful feedback came in the form of something like “why your book or article should conform more to my own scholarship in that same area.” Thankfully, most of my editors were quick to see through those “grandstanders” and dismiss their criticisms as “off the mark.” But those anonymous comments always left me feeling that some fellow scholars tended to behave too jealously or guardedly toward new scholarship in their respective fields. In one case, I pulled an article from a well-known, peer-reviewed journal because the feedback from one reviewer was unbelievably mean-spirited and grossly unfounded. The feedback had almost nothing to do with the subject of my article but everything to do with that reviewer’s own ideas about a tangential aspect of the topic. Ultimately, I withdrew the article when the editor told me that I should “consider” revising it completely (!) to satisfy that particular unnamed reviewer’s comments. Clearly, the editor was afraid to run afoul of that reviewer; and, as a result, I walked. Thanks, but no thanks. And because the reviewer was anonymous, I could not interact with him or her to learn the true motives behind the comments. Thus, I tend to agree with Tim Hitchcock’s implied message that peer-filtered journals and books, by virtue of the very formats they employ and protect ruthlessly, can represent a type of “fascist authority” unto themselves. Open, attributed, online peer reviews, if patrolled properly, can be very liberating and eminently useful. And the historian has the ability to interact with the reviewer — a big plus for me.

Although I am clearly advocating for open scholarship online, I am most concerned with the proprietary aspects of one’s work and its overall permanence online. Edward Ayers is correct that producing digital scholarship is a risky venture in the online world of hackers and cyber-attackers. Since I have two published books on the shelves that represent decades of research, travel, and writing (and lots of dollars tied up in those efforts), I feel less inclined to risk losing my proprietary rights to that scholarship simply to fulfill an altruistic impulse to get on board with the digital world and make everything available online. Frankly, a well-packaged, copyrighted monograph produced by an academic publishing house is akin (in my mind) to a safety-deposit box for one’s scholarship. The publisher copyrights it for the author, obtains an ISBN, catalogs it with the Library of Congress, and so on. Thus, my rights as an author and the owner of intellectual property are, at least in theory, secure — even with the one version of my book that exists digitally for Kindle users. And a hard-copy version on the shelves further gives me a sense of permanence, a point that Cummings and Jarrett underscore when they write that nothing is “safely online in the long-term.” Even their assurance that not much is lost, either, doesn’t make me feel much better. While William Thomas sees virtue in exposing the inner workings of one’s scholarship online, I see some risk. I prefer that a strong first draft appear online for peer review after the historian has done much of his or her work offline; and, likewise, I prefer a way to upload a finished product somewhat permanently, well after the online peer reviews have helped generate a final product. If only scraps and bits of hard-won research appear online without some safeguarding of the scholarship, then Ayers’s fears will be realized: historians won’t risk it. Thus, I think that the successful 2012 appeal by Alex Galarza and company to the American Historical Association for guidelines regarding the value of, and attribution for, digital scholarship, both within the academy and for tenure-track credit, is extremely important.

But in spite of my eagerness for online peer review and digital history more broadly, I think that the book as a form through which to communicate ideas and arguments does not need to disappear into the stratosphere. Books can simply take on new forms in a digital world, forms that allow them to expand their impact and to reach — and draw in — a much larger and interested audience. Electronic (or digital) books like those developed for Kindle readers are a good start, but the basic linear form of organizing an argument and presenting it with all its attendant evidence has no substitute — at least for the moment. I’m open to new ways of re-imagining the book in the digital world, but I think we have to use the book format as the basic building block of any attempt to present an original historical argument online. If we don’t teach new historians how to use that form now, how can they hope to make sense of the thousands of tomes that already exist and upon which these very same historians will have to rely as secondary sources well into the future?

Steve Rusiecki

Week Ten Reading Blog: Gaming the System

“Our consciousness of the past is inextricably bound by pictures.” I concur fully with Joshua Brown’s assertion about the inherent visual quality of history and the impact that historical visualization has on our understanding of the past. In my experience, the power of history rests not only in well-crafted prose that critically examines historical themes and events but also in one’s ability to experience that history visually and through other senses. Thus, I fully appreciate the digitally immersive, highly visual quality of Brown’s The Lost Museum Web site and his historiographical argument for why and how people of the past tended to resist such visual innovations in the media of their day. That same resistance seems ever-present today as we negotiate the new era of the “digital turn.” Brown’s argument is most convincing to me in the context of understanding the inherent resistance people often bring to new visualizations of the past, but I don’t see how it supports the core subject of this week’s readings, gaming, or addresses my own skepticism about this form of digitally visual history.

The article by Laura Zucconi, Ethan Watrall, Hannah Ueno, and Lisa Rosner about the development of their interactive historical game, Pox and the City, gets to the heart of my concerns regarding digital games as a viable means of teaching the meaning behind historical events. Granted, as the authors assert, games “work best when they are visually stimulating,” but the visualizations and attendant algorithms that players must negotiate don’t seem to me to be the best way to do history — at least history that has any true meaning. My deeply entrenched skepticism on this topic stems from watching my own son’s addictive interplay with video and digital games, starting in the late 1980s. He played a variety of historically themed games that I found to be amusing and, in some cases, quite accurate in terms of period dress, equipment, and architecture. Specific games that I recall him playing were computer-based titles such as Civilization (the same one by Sid Meier that Adam Chapman analyzes, I believe) and The Sims, or console-based fantasy games with historical components, such as Zelda. But he always seemed most interested in mastering the algorithms that allowed him to proceed to the next level, a behavior in keeping with Zucconi and company’s statement that “games work best when they are open-ended, allowing players a set of choices without pre-determined outcomes.” But in seeking to master these algorithms, my son often made anachronistic choices that essentially created counterfactual history, “a kind of historical fiction rather than historical fact,” as the authors readily admit. When I told my son that certain choices he made digitally would not have been viable or possible in the game’s actual historical setting, he simply stated, “Well, that’s how it is here.” Thus, I was never quite certain that he learned the proper lessons from his digital immersion in these historical games. I would like to believe that he took away some important themes, such as social lessons about how people lived in the past and the limitations of their existence. But, in the main, he seemed to treat these historical games more as problem-solving tutorials (not really a bad thing) than as instructive historical media. And, worst of all, he grew up hating history. Go figure.

In many ways, the games my son played in the past and contemporary examples such as Pox and the City have more in common with modeling simulations like Elijah Meeks’s and Karl Grossner’s ORBIS than they do with Brown’s The Lost Museum. Simulations can help us predict what might happen using historical data as the basis for their construction. Thus, they have their own utility in this regard. But I think that players can get something more out of games if the scenarios are coded to avoid or eliminate anachronistic choices; in other words, if the available algorithm steers players toward proper (my code word for “realistic”) historical situations, then the game can have greater value.

Adam Chapman’s approach is probably the best way to assess the historical utility of these games. He contends that if we are to understand video games as a digital “form” that conveys a type of historical meaning rather than as one that captures “content” with historical fidelity, then we must examine them in the context of the video-game medium itself. Like film, as Chapman explains, games can “function as a mode of historical expression” if we choose to view them in the context of the medium within which they exist and, perhaps, as their own analytical category. I agree with Chapman that these video games “are, like all histories, mimetic cultural products”; but, like all things that people construct, even historical monographs, they have to be considered effective only in the context of their capabilities and limitations. But I depart from Chapman in his contention that a game like Sid Meier’s Civilization “is history [emphasis added] [simply] because it is a text that allows playful engagement with, connects to and produces discourse about the past.” I agree that playing can equal learning in some settings, but when someone “flips the mental switch” from the learning mode to the entertainment mode, I think more is lost than gained. And, yes, I concede Trevor Owens’s point that games reach a broader audience. However, I disagree with Owens that games are “particularly good” at “articulating causal models for why something turned out the way it did.” Those causal models are most likely carefully programmed algorithms that may not make sense in the real world.

Ultimately, I think games can be one way to allow an interested public to explore history in both a visual and an immersive sense — as long as we supplement those games with other effective forms of historical discourse, such as interactive maps; qualitative and quantitative models; high-resolution photographs and lithographs; and, yes, dare I say it, monographs. I see more power in many forms working in unison with each other rather than privileging one form, such as games, over all others.

Steve Rusiecki

Week Nine Reading Blog: Crowdsourcing Might Get a Little Cramped

Of all the readings we’ve done this semester, I’m most ambivalent about the topic at hand, Crowdsourcing History, and its potential impact, both positive and negative, on the quality of digital history. In my more than 30 years in the Army, I’ve had to conform to a very useful, productive approach to problem-solving and product development, in which teamwork and collaboration proved essential in most cases. But I’ve also seen aberrations of this same approach, which I have pejoratively deemed “the group-think,” scuttle many efforts and produce less-than-optimum outcomes. In other words, too many “fingers in the pot,” and too many “chiefs” attached to those fingers, have watered down potentially excellent results to mediocrity in order to appease some larger group. In turn, these products, when presented to senior leaders for a decision, often obscured self-imposed disadvantages behind the fact that everyone had a hand in their creation and “could live” with the final outcome. I don’t want to belittle the power of collaboration in achieving excellent results, but I also want to ensure that the outcomes produced by singular personalities adhering to equally rigid standards don’t get kicked to the curb in favor of a particular methodology for producing digital history.

Roy Rosenzweig’s article on Wikipedia cuts to the core of my concerns, despite his enthusiasm for a medium that allows for the construction of historical knowledge based on a lot of “fingers in the pot.” Although I find Wikipedia to be a useful tool, and I fully support “democratized” history online, I don’t share Rosenzweig’s complete enthusiasm for the site. The question that haunts me most is: if anyone can add to or revise historical entries on Wikipedia with the same authority as a trained historian, then why have trained historians? Rosenzweig specifically takes aim at the fact that current historical scholarship is “characterized by possessive individualism” (Rosenzweig, 117). Frankly, I don’t see a problem with a historian whose life’s work is defined by a specific historical topic, genre, or event. Without experts in specific subjects, who can police a crowdsourced site like Wikipedia effectively in order to strip out the errors and blatant misinterpretations? Rosenzweig makes good points when he contends that most facts on Wikipedia, according to his own sampling, are generally correct and that Wikipedia’s real virtue rests in its capability for public revision. And yes, Wikipedia is an online encyclopedia that is not intended to replace individual historical scholarship; even so, I think that Wikipedia’s prohibition on original historical scholarship making its way onto the site is a pipe dream. I’ve seen it happening already on certain topic entries for which I have specific expertise, such as the Battle of the Bulge during World War II. Overall, I think we need to view how collaborative historical scholarship occurs online in a very specific way. We still need the “possessive individualism” that comes with a scholar taking ownership of his or her historical work and then using that special background to help ensure that others “get it right” online — or at least to make sure that those who do add their two cents do so in an informed manner. We need that individual historical expertise out there to advocate both for the academy and for democratizing history online in order to sustain the balance necessary to doing good history, regardless of the medium.

But one theme surfaced in the readings regarding Wikipedia that left me flatfooted: the demographics of the site’s contributors. Perhaps the most puzzling aspect of Wikipedia’s participatory demographic is that its contributors are mostly educated white men from the West. According to Leslie Madsen-Brooks, a 2011 study identified only 8.5 percent of the site’s editors as women. By contrast, she states, most users of genealogical Web sites (around 65 percent) are women. These statistics puzzle me. I’m not sure why such a gender gap exists between these two kinds of sites. Does the gap suggest that American (or perhaps Western) society has quietly, through overt practice, accepted strictly defined gender roles in preserving some forms of history, such as women serving as the keepers of the family history? Does Wikipedia attract more men because men feel that they want to safeguard a historical record that perpetuates a certain gender-defined image of themselves — for personal or political reasons? Madsen-Brooks doesn’t seem to have an answer, and neither do I. Frankly, this whole discussion surprised me and has left me hanging. I’m stumped.

I am most enthusiastic, though, about the kind of “crowdsourcing” that Trevor Owens, Tim Causer, Justin Tonra, and Valerie Wallace describe. The fastest route to democratizing history through the digital medium is to get the primary sources out there on the Web as effectively, accurately, and efficiently as possible. I think the efforts to involve the public in this enterprise are on target, particularly in the case of the Jeremy Bentham archive. Causer and company, while outlining the pitfalls and frustrations inherent in getting amateurs and volunteers to transcribe primary sources into digitized text, are employing perhaps the best method for making the Bentham archive available. I happen to own hundreds of original Civil War letters (my hobby is collecting original World War II and Civil War documents), and I’ve transcribed all of them into Word documents. The process is tedious and aggravating, especially when the handwriting is poor or the text is faded. I routinely had a fellow historian check my work to make corrections or to reinterpret some of the more difficult handwriting issues. But the fact that I had already done the “heavy lifting” by transcribing the bulk of each document with reasonable accuracy allowed him to concentrate on those finer points requiring correction.
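Out of curiosity, I sketched how that second-pass check might look in code. The following is a minimal sketch using Python’s standard difflib module to surface the lines where two transcriptions disagree; the file names and the letter text are hypothetical placeholders, not material from my actual collection.

```python
# A minimal sketch of the transcription-checking step described above,
# using Python's standard difflib module. The two snippets are invented
# stand-ins for a draft and a reviewed version of a Civil War letter.
import difflib

draft = """Camp near Fredricksburg, Va.
We marched twelve miles through the mudd before dark.""".splitlines()

reviewed = """Camp near Fredericksburg, Va.
We marched twelve miles through the mud before dark.""".splitlines()

# unified_diff emits only the lines where the versions disagree, so a
# reviewer's corrections are easy to spot and adjudicate one by one.
for line in difflib.unified_diff(draft, reviewed,
                                 fromfile="my_draft.txt",
                                 tofile="reviewed.txt",
                                 lineterm=""):
    print(line)
```

A volunteer transcription project could run the same comparison between any two contributors’ versions of a page to flag the passages that need an expert’s eye.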

Granted, as Causer and company point out, the quality-control process did not really “[quicken] the pace of transcription” (Causer et al, 130). But what’s more important? Getting it right the first time, or getting it quickly? My vote is for getting it right the first time. Sheila A. Brennan and T. Mills Kelly faced this fact when they realized that, much to their chagrin, they had to allow time for Hurricane Katrina survivors to heal emotionally for a year or two before they (the survivors) could provide testimony that was of high quality and useful. Thus, fast is not always good, and “crowdsourcing,” while effective, may not equal speed of output. Most importantly, getting the public to help in producing digitized archives not only empowers the average person to help preserve for all time the intellectual fruits of his or her past, but it also helps to “democratize” history by making members of the public “intimate” with the primary sources. What better way to energize the public’s imagination and awareness of history than through the collective preservation of its treasured sources? I’m all-in for this type of crowdsourcing effort.

Steve Rusiecki

Week Eight Reading Blog: The Digital Face of Public History

The status of public (or popular) history when compared to academic history has always intrigued me. At what point does the academy accept history generated by the non-academician? Or, for that matter, history produced for a non-academic audience? I think the question posed by Carl Smith in his 1998 article “Can You Do Serious History on the Web?” cuts to the heart of this debate. If the Web is open to all comers, then Smith’s question suggests that a rift exists between academic, or “serious,” history and its public, Web-based counterpart. Thus, if “serious” history is academic, as Smith implies, then public or popular history is the exact opposite — “unserious.” Given what the readings for this week have described, I think history developed for, and presented on, the Web for a broader public can be just as (if not more) “serious” than what the academy produces. And, in many ways, that Web-based history can touch many more lives and influence the present more dramatically than if historical debates remained the exclusive domain of the academy’s so-called “ivory tower.”

For me, history serves a purpose. I craft the military history I write for a specific audience — soldiers and, yes, the general public. The experiential lessons gleaned from past conflicts consistently inform our application of military power in the present and future. Moreover, the public’s understanding of warfare further develops a broader appreciation for the sacrifices men and women have made on behalf of the nation over time. But in a broader sense, history helps us stake out a way ahead and, ideally, prevents us from repeating the same mistakes over and over again. Only by making history more universal can we hope to fulfill such an ambitious charter. Thankfully, the rapidly developing digital world is pushing us inexorably toward this very goal.

Carl Smith’s discussion of the online project he curated, The Great Chicago Fire and the Web of Memory, provides an excellent example of an academically managed, digitally constructed site of historical knowledge targeted toward a wider, non-academic audience. The beauty of this public Web site is that it remains visually immersive without losing the authority of the trained historian. The site’s various snippets of carefully crafted, tightly packaged historical narratives, all based on reams of primary-source material, material that the average user can also browse and evaluate, lend the site a remarkable power. And yes, the narrative is there if you want to follow it (an important feature for me in particular). I appreciate the fact that Smith recognized the limitless capacity of his digital medium; he included and then managed over 300 different pages on the site, each with its own wealth of material in the form of facsimile representations of original documents, lithographs, and photographs. The analog world would never have let him get away with something so cost-intensive. In my view, Smith has given the public more than it needs and can possibly hope to absorb; but, in doing so, he has increased dramatically the possibility that something on the site will appeal to a more robust audience of varied tastes and interests. And in appealing to that broader public, someone is likely to take away a powerful lesson about what the Great Chicago Fire means to us today and how its memory can influence that city, and other cities, in the future.

Most importantly, public history on the Web enables powerfully immersive visual and sensory experiences that have largely been missing from the history I tend to experience. I agree with Mark Tebeau that the voice of someone who lived through a historical event describing what happened and what it meant to him or her is powerful. Such voices, he rightly contends, “call forth memory, time, and context” (Tebeau, 28).  Can a monograph achieve that end? Perhaps … in some cases. But the experience is not the same. Even the virtual tours of American heritage sites like Monticello that Anne Lindsay described can create a sensory experience of history that the printed monograph cannot achieve.

I often recall the ridiculously cerebral character in the 1984 movie Ghostbusters, Egon Spengler (played by the late Harold Ramis), responding to another character’s question about what books he liked to read. Egon simply deadpanned: “Print is dead.” Granted, the remark was written for comic relief, but it has haunted me for 30 years, because I’m someone who loves to immerse himself in the power of the written word. It also made me aware that not everyone experiences the written word in the same way and that historians can and should pursue other possibilities for an immersive historical experience to enhance the power of history. In other words, in keeping with Egon’s assertion, we need to find ways to bring “print,” and history, to life in order to immerse people in the experiences of the past. Today’s digital world gets us much of the way there: it allows us to experience the past visually and aurally, a capability that makes history all the more compelling to many people.

Bruce Wyman, Scott Smith, Daniel Myers, and Michael Godfrey argue collectively that “people are becoming different types of learners” and require new ways to experience history (Wyman et al, 462). I agree fully. The more we can do to immerse an interested public directly in a multi-sensory historical experience, the more that history will mean to them in the context of their own lives. For my first book, I walked the very ground in Belgium where the battle I was researching took place, and I had the ability to interview scores of surviving veterans from both sides about their experiences in that same battle. These remarkably immersive experiences were life-changing for me, but such opportunities faded quickly as the veterans passed on and as former battlefields became private property. Such experiences are rare for historians today — and even rarer for the interested public. But, as Wyman and company have testified, museums are a great place to leverage the emerging sensory capabilities of the digital world in order to replicate for future generations what I experienced in the early 1980s researching that book. Wyman and company’s strategic thoughts for an immersive, interactive historical experience in a museum are on target. In fact, the most important guideline they proffer, in my opinion, is to “[r]emove barriers to content and experience” (Wyman et al, 467). The average museum visitor should not need a PhD in history or computer science in order to wade through layers of technological fanfare and dense content just to experience the past interactively. A clear goal and a consistent content approach, as the authors contend, are crucial to the success of any interactive, immersive experience.

Furthermore, Melissa Terras has described the vastly different historical interests people have exhibited based upon her analysis of the most commonly accessed digital archives at places like Oxford and Cambridge. Thus, the immersive experience must cater to these wide-ranging interests. I can see the point behind Roy Rosenzweig’s ambition to make history more democratic. History must be useful, but it has to mean something to us first so that we can use it. Thus, history must be accessible to all — not a select few. I think the digital world of today can get us there.

Steve Rusiecki

Week Seven Reading Blog: ‘Historicizing’ Geography is Long Overdue

Perhaps no other visualization technique in the realm of digital history excites me more than the ability to link an event to the place where it happened — in both a geographic and a geo-digital sense. My previous blogs have allowed me to articulate the importance I place on history as a highly visual academic endeavor. History happened to real people living in or traversing real places, and I think the static representations in hard-copy books (maps, still photographs, etc.) have finally gone the way of the dodo thanks to digital mapping and modeling resources such as Google Maps and ORBIS. Showing where history happened in a spatial sense — and linking large corpora of data to specific geographical locations, as described by Tim Hitchcock and Stephen Robertson — will not only lead to expanded historical knowledge but will also allow the average person to interact with data represented on a map, to recognize spatial patterns from that interaction, and to replay events digitally (at varying levels of fidelity) in order to take away useful lessons that may influence the present and future.
Needless to say, much of what I am discussing is in the context of maps used in military history. War and its attendant battles all happen in both time and space. The principal object of war, according to Clausewitz and almost anyone who has experienced war, is to destroy an enemy army at a specific geographical location at some point in time. Thus, the only way to communicate the importance of a particular battle is to show how it unfolded on the very terrain where it occurred. The readings opened up mapping possibilities for military history that exceeded my wildest expectations. The ability to use tools like GIS to link a textual analysis or narrative of an event directly to the place where it happened, as suggested by Hitchcock, is incredible.
Not surprisingly, and since my other blogs have testified to D-Day as my current subject of historical inquiry, the first thought that came to mind was a digital representation of the Allied landing on Omaha Beach on 6 June 1944. Historians are still debating today how the American forces managed to get off the beach under such strong German opposition and chaotic conditions. I can’t help but imagine a mapping approach that mimics Hitchcock’s and Robertson’s use of GIS and Google Maps as a way to portray geographically the individual squad and platoon actions that carried the day on Omaha Beach. While Robertson was able to conflate both data and location to reconstruct a Harlem street corner’s inherent social make-up and frictions, the same could occur with a battle map of Omaha Beach. An interested user could click on any part of the beach at any point in the landings to identify the landing unit, its casualties, and its actions on the beach. In this way, the user could scroll through the entire map, select what he or she considers key terrain, and match the Allied advance on the beach to specific units, perhaps developing a fresh perspective on how and why the Allies prevailed that day. Did individual small-unit actions really carry the day? Did the planning and rehearsals for the invasion really coalesce on the beach as intended to allow for a tactical success by day’s end? Digital mapping can add a new perspective to address those questions. I think this interplay of digital mapping and data gets to the core of what Edward L. Ayers and Scott Nesbit see as “deep contingency,” which for them is “inherently spatial” (Ayers and Nesbit, 9-10). In other words, true agency is most evident when engaging not just space but scale. For example, small-unit actions on Omaha Beach, when considered in the context of the soldiers’ collective or individual agency, did not occur in separate “silos” but potentially complemented the actions of others along that mile-long beach, resulting in a known outcome but not necessarily a clear understanding of how that outcome came to be. Just imagining how I could have applied these tools to the maps from my own two books intrigues me to no end.
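To make that click-a-sector idea concrete, here is a minimal sketch of such an interactive map, assuming the Python folium library (a wrapper around the Leaflet mapping engine). The coordinates are rough points along the Normandy coast, and the unit annotations are hypothetical placeholders rather than researched data.

```python
# A minimal sketch of a clickable Omaha Beach map using folium. The
# sector names, coordinates, and notes below are illustrative only; a
# real project would draw on verified unit records and period maps.
import folium

sectors = [
    ("Dog Green -- 116th Infantry", 49.379, -0.904,
     "Hypothetical note: heavy casualties at the water line."),
    ("Easy Red -- 16th Infantry", 49.363, -0.859,
     "Hypothetical note: small groups infiltrated the bluffs."),
]

battle_map = folium.Map(location=[49.37, -0.88], zoom_start=13)
for name, lat, lon, note in sectors:
    folium.Marker(
        location=[lat, lon],
        popup=f"<b>{name}</b><br>{note}",  # appears when the user clicks
        tooltip=name,
    ).add_to(battle_map)

battle_map.save("omaha_beach.html")  # open in a browser and click the markers
```

Each marker’s popup is where the casualty figures and unit actions would live; layering in the time dimension would be the harder, and more interesting, problem.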
I am less taken with modeling programs like Elijah Meeks’s and Karl Grossner’s ORBIS, which, according to reviewer Stuart Dunn, aimed “to model the costs and times of travel between different points within the Roman Empire, over land or by sea or river.” Even though ORBIS, according to Meeks, is wildly popular and well grounded in historically accurate data, it seems like an exercise in counterfactual history. For example, someone could use it to test the cost and travel times from a randomly selected Point A to a randomly selected Point B within the Roman Empire just to see what the program might spit out. Granted, the results may inform our understanding of travel limitations in the Roman Empire in a broader sense. But the problem for me is that that particular journey may never have happened. Thus, it becomes less history and more a simulation of what might have been. I agree with Dunn that programs like ORBIS, when placed in the context of history, push historians into “perilous territory.” Does something like ORBIS make us historians or prognosticators? Or, worse, do we become revisionists based on what might have been and not what really happened? I readily acknowledge that simulation modeling can be very effective. The U.S. Army has been using it for decades, specifically with programs configured with known weapons capabilities, predictive doctrinal behaviors of potential enemies, and so on. These programs help to model what might happen, not what happened; and, most importantly, they are great training tools. In any case, I am happy to know that ORBIS is popular, because any interest in history is a great thing to me. But Dunn is equally on target when he states his concern that future works of history, as articulated through the prism of a program like ORBIS, might play principally to a “populist rather than academic” audience. I think what Dunn really means is that history might become nothing more than a game to be reworked and revised in hindsight, subject to the influence of ahistorical, man-made algorithms that appeal mostly to the “gamers” of society.
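For what it is worth, the basic mechanics underlying such a model are easy to picture: a network whose edges carry travel costs, over which the software computes the cheapest or fastest route. Here is a toy sketch in that spirit using the Python networkx library; the route segments and day counts are invented for illustration and bear no relation to ORBIS’s actual data.

```python
# A toy, ORBIS-flavored travel network built with networkx. Every edge
# carries an invented travel time in days; the shortest-path query then
# plays the role of ORBIS's route calculator.
import networkx as nx

G = nx.Graph()
G.add_edge("Roma", "Ostia", days=1)
G.add_edge("Ostia", "Carthago", days=4)      # sea route
G.add_edge("Roma", "Mediolanum", days=10)    # land route
G.add_edge("Mediolanum", "Lugdunum", days=12)
G.add_edge("Lugdunum", "Carthago", days=14)  # collapsed multi-leg route

route = nx.shortest_path(G, "Roma", "Carthago", weight="days")
days = nx.shortest_path_length(G, "Roma", "Carthago", weight="days")
print(" -> ".join(route), f"({days} days)")  # Roma -> Ostia -> Carthago (5 days)
```

The sketch makes my complaint visible: the machinery happily computes any Point-A-to-Point-B journey, whether or not that journey ever took place.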
In the main, I am very impressed with the potential that geospatial mapping offers to our understanding of historical events. The examples provided by Hitchcock and Robertson are incredibly tantalizing, but we have to keep in mind that just because the results are generated digitally, our analysis of those results may not keep pace with the outputs. Robertson admitted to spending six years studying Harlem through the prism of these geospatial tools in order to recover the everyday lives of black people living there in the early 20th century. In my mind, the most worthwhile things are seldom easy to produce, and I would strongly consider investing six years in a data-linked, geospatial representation of Omaha Beach, particularly if that representation could teach our junior leaders in the Army today the critical importance of their individual decisions and actions on the wider battlefield. That level of “deep contingency” in the context of a spatially defined battlefield matters most to me — because I’ve experienced it personally.

Steve Rusiecki

Week Six Reading Blog: Visualizing the Past is a Great Thing

Like many people, I tend to respond positively to visualizations of ideas, concepts, and topics. For me, history is a very visual enterprise, and I have never been able to imagine producing history without some accompanying visual representations to enhance the prose and the ideas I set forth. For the books I’ve published on World War II topics, photographs and maps have always been my visual media of choice. In fact, I still get my hackles up thinking about the “battle royals” I had with my publishers over how many photographs and maps I could include in my books. They always low-balled me from the outset, so I had to scrape for every additional photograph or map. Alas, publishing is capitalism at its finest: the more visual representations in the book, the more they affected the publisher’s bottom line. The scars I suffered in those battles are still with me, and I’ve never felt that my works were as complete as they could have been without the full complement of photographs and maps I intended to use. But now, with the emergence of digital history and new ways of providing digitized visualizations beyond simply maps and photographs, those “battle royals” may be a thing of the past. Hard copies may still be limited in what graphics and photographs they can include due to cost considerations, but electronic Kindle editions and the use of Web sites to supplement published hard-copy works certainly offer a more feasible and cost-effective approach to extensive visual portrayals of the past.

Although maps and photographs have been my traditional “visualizations of choice,” I have always been drawn to histograms, tables, and other graphs as possible ways to portray selected bits of information visually, particularly when depicting significant changes over time or when quantifying particular assertions. Yet developing such quantitative representations has always proved daunting for me, principally because I believed that they failed to capture the “fuzziness” of some interpretations appropriately. For that long-standing reason, Johanna Drucker’s article about the subjective, interpretive nature of certain data — which she labels “capta” — made perfect sense to me. My past attempts to employ otherwise quantitative tools to portray subjective data, or capta, have always fallen flat. I worried that graphic portrayals that could not account for the ambiguity inherent in interpreted data might suggest an attempt by me to engage in “quantitative manipulation” or some other form of fallacious reasoning to make my point. Frankly, I’ve always been skeptical of statistics and other data, since they are prone to such easy manipulation. And, for that reason, complicated graphs and tables have been something that I’ve simply “jumped over” in my readings of historical monographs. Most of the time, figuring out what those visualizations were trying to say proved too aggravating and time-consuming. Few of them met John Theibault’s standards of being “transparent, accurate, and information rich.”

But the examples provided by Drucker in two of her article’s figures, Figures 2 and 4, really worked for me. They showed quite readily, as she intended, an alternative way of portraying data (or capta) that was, in her words, “taken” and not “given”; they depicted the “fuzziness” inherent in the capta — the external and internal factors that shaped and re-shaped that information — without scuttling the larger point being made. Clearly, Drucker has made both a philosophical and a practical point of the highest order: historians must — and I mean “must” — find visual techniques to present capta in a way that represents the humanistic methods that generated it, methods that, as Drucker claims, “are counter to the idea of reliably repeatable experiments or standard metrics that assume observer independent phenomena” (Drucker, numbered paragraph 13). In fact, Lauren F. Klein, in analyzing Jefferson’s correspondence for “breadcrumbs” about James Hemings, cites her research efforts as proof of Drucker’s broader assertion that graphic techniques applicable to the empirical sciences can mask the subjective biases of historically interpreted capta. I agree, and I can’t wait to experiment with visualization techniques that will subordinate the quantitative to the qualitative.
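One crude way to start experimenting is to make the interpretive uncertainty a visible element of the chart itself. The sketch below, written in Python with matplotlib, uses asymmetric error bars and a deliberately soft fill to signal that the counts are interpreted rather than measured; the categories and numbers are invented for illustration only.

```python
# A crude sketch of a capta-friendly chart: asymmetric error bars mark
# the range of plausible readings instead of implying one exact count.
# All categories and values below are invented placeholders.
import matplotlib.pyplot as plt

categories = ["optimistic", "anxious", "neutral"]  # interpreted tone of editorials
best_guess = [14, 9, 22]   # my reading of ambiguous items
low = [3, 4, 2]            # how far each count could plausibly fall
high = [5, 6, 3]           # how far each count could plausibly rise

fig, ax = plt.subplots()
ax.bar(categories, best_guess, alpha=0.5)  # soft fill signals interpretation
ax.errorbar(categories, best_guess, yerr=[low, high],
            fmt="none", ecolor="black", capsize=6)
ax.set_ylabel("Editorials (interpreted count)")
ax.set_title("Tone of pre-invasion editorials (capta, not data)")
plt.show()
```

This is a far cry from Drucker’s more radical figures, but even this small gesture keeps the reader from mistaking an interpretation for a measurement.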

Many of the sample visualizations that John Theibault included in his article stirred my imagination and opened up many possibilities for effective data-visualization techniques. Density maps in particular demonstrated both transparency and meaning; through colorized graphics overlaid on actual geographical representations (like the entire United States), they showed me how to track specific events over space and time. This approach intrigued me most because of the possibility of using similar maps (albeit with animation) to portray the movement of specific units engaged in, say, a World War II battle. In fact, the idea of recreating the maps from my two books in such a format fascinated me; by posting them on the Web as supplements to my hard-copy books, readers could follow troop movements and engagements in real time over actual maps of the terrain, creating a visual narrative that would not only complement but perhaps significantly enhance a reader’s understanding of a battle’s flow and the inherent friction in war. As Theibault rightly opined, “Animation increases [the visualization's] interpretive force dramatically.” Oddly enough, back in the late 1990s, the U.S. Army tried to use a similar visualization tool with computers to track combat forces in real time over actual terrain as a battle was unfolding. The tool did not work exactly as planned but instead morphed into something better that the Army uses today. Theibault’s visual examples remind me of those early battle-mapping graphics and how quickly such things can develop into other, more effective tools over time.
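The mechanics of such an animated troop-movement map are also simple to prototype. Here is a toy sketch using matplotlib’s animation API over a handful of invented coordinates; a real version would plot georeferenced unit positions over a scanned period map.

```python
# A toy animation of one unit's movement, using matplotlib's FuncAnimation.
# The path coordinates are invented; they stand in for digitized positions
# taken at successive intervals during a battle.
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation

path = [(0, 0), (1, 0.5), (2, 1.5), (3, 1.8), (4, 3)]  # hypothetical hourly fixes

fig, ax = plt.subplots()
ax.set_xlim(-0.5, 4.5)
ax.set_ylim(-0.5, 3.5)
marker, = ax.plot([], [], "ro", markersize=10)  # current position
trail, = ax.plot([], [], "r--", linewidth=1)    # movement so far

def update(frame):
    xs = [p[0] for p in path[:frame + 1]]
    ys = [p[1] for p in path[:frame + 1]]
    marker.set_data([xs[-1]], [ys[-1]])
    trail.set_data(xs, ys)
    return marker, trail

anim = FuncAnimation(fig, update, frames=len(path), interval=800)
plt.show()
```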

The visualizations that are least effective for me are the ones built from nodes, “edges,” and other abstractions — basically network graphs. I prefer to see visual information grounded contextually in something familiar, like a map or a basic graph. The graph that Elena M. Friot generated with the Gephi program simply does not work for me. And I’m inclined to agree with Scott Weingart’s assertion that “network structures are deceitful,” primarily because they rely so heavily on adhering to specific input rules. If you don’t enter your values (or whatever) properly, God only knows what will come out at the other end. Probably the Frankenstein’s monster of all graphic portrayals — a hodgepodge of complexity and confusion. But visualizations are a good thing — a very good thing – and I want more opportunities to use them in my work.

Steve Rusiecki


Week Five Reading Blog: The Power of Text Mining and Topic Modeling

Since this course began, I have found myself reading ahead in an effort to find those digital tools that might prove most useful to me in my research. The two techniques that excited me the most were text mining and topic modeling, each of which suggested to me ways of negotiating large swaths of newspapers related to D-Day and the months leading up to the invasion without spending days and possibly months sifting through irrelevant online or on-site archives. Now I could compile the relevant sources quickly, but I also recognized that these mining and modeling efforts could not supplant my need to read every relevant primary source in its original form — as an OCR image or otherwise — in order to evaluate its content and context properly. I was equally excited to know that these tools could spit out graphs and histograms that could add visual impact to any assertions I might make from a quantitative perspective. Once again, however, I am mindful of the strong thematic thread running throughout this week’s readings and the readings from other weeks: know the shortcomings of these tools and proceed with caution.

The assertion made by Frederick W. Gibbs and Daniel J. Cohen that a tool like text mining can “open gateways to further exploration rather than [serving as] conclusive evidence” resonates strongly with me (Gibbs and Cohen, 74). Like these two historians, I don’t see the results of a text or topic search standing alone as evidence. In some cases, the quantification of hits can help make broader points, such as Cameron Blevins’s foray into defining regional spatiality in certain newspapers. His extensive text-mining efforts allowed him to argue that newspapers “privilege . . . certain places over others,” thus creating an “imagined geography” for their readers (Blevins, 124). The visual mapping he generated proved equally impressive and useful in driving home his point. But Blevins was careful to point out that the traditional skills of the historian — that is, close reading and contextual source evaluation — have never been more necessary as a way to avoid falling into the superficiality trap. In my circumstances, text mining will prove useful for determining the degree to which newspapers featured articles on key Allied leaders leading up to the invasion or, once the invasion began on 6 June 1944, privileged coverage of the Allied efforts in the West over those of the Soviets in the East. The benefits of these searches are twofold: quantification on one hand and, on the other, the ability to read closely for context and meaning only those editions that yielded “hits.” But I also agree with Ted Underwood that a basic search, which is how he seems to define text mining, will generally give you what you probably expected to find. But what about finding those unknown “gems”? I think the second tool, topic modeling, will help me in this endeavor.
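
To make this concrete for myself, I mocked up the simplest possible version of such a search below. The three-edition “corpus” is invented, and a real run would have to query thousands of OCR’d pages, but the workflow is the one I have in mind: quantify first, then close-read only the editions that produced hits.

```python
# A bare-bones sketch of the text mining I have in mind: count how often
# a phrase appears in each edition's OCR text and tally hits by month.
# The corpus structure (a dict of date -> text) is hypothetical.
from collections import Counter

corpus = {
    "1944-01-15": "... plans for the invasion of europe dominated talk ...",
    "1944-03-02": "... general eisenhower said little of the invasion ...",
    "1944-06-06": "... the invasion of europe has begun; the invasion of europe ...",
}

phrase = "invasion of europe"
hits_by_month = Counter()
for date, text in corpus.items():
    month = date[:7]                      # "YYYY-MM"
    hits_by_month[month] += text.lower().count(phrase)

for month, hits in sorted(hits_by_month.items()):
    print(month, "#" * hits)              # a crude text histogram
# The point: quantify first, then go read the editions that produced hits.
```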

I found Robert K. Nelson’s “Mining the Dispatch” to be a remarkably effective discussion of the possibilities, and potential pitfalls, of topic modeling, which he defines as a “probabilistic technique to uncover categories and discover patterns in and among texts.” What excites me about the prospects of this technique is that the pairing of words as a way to search for macro-patterns within specific topic areas can lead to new discoveries. For example, in the context of my research, the editorials addressing specific aspects of D-Day in the months leading up to the invasion cover many, many different topics, most of which could be difficult to narrow down and categorize by hand. Topic modeling seems to offer me a way of focusing more quickly on the numerous themes embedded in opinion columns, editorials, and other articles alike. But topic modeling also seems to have the most pitfalls. Micki Kaufman suggests that topic modeling requires a specific skill set that most historians lack. If Kaufman is correct, then many historians who struggle to master such a skill (or skills) may find themselves avoiding the tool altogether or grossly misinterpreting their results. Or, as Ted Underwood suggests, if programming skills become necessary because the project’s scope is too great, many historians may opt out. Frankly, after reviewing the graphic portrayals of Kaufman’s analysis of Kissinger’s Memcons and Telcons using the topic-modeling software MALLET, I had difficulty making sense of the images and what they were trying to tell me. They supposedly portrayed meaningful word correlations; but, in the absence of further explication, they simply puzzled me. I guess my mind just doesn’t work that way. Nelson’s approach seemed more in line with what I would consider to be possible given my own skills, particularly his sorting of machine-generated topics into clearly labeled, written categories. Topic modeling is one digital tool that I want to try for my project, but I’m a bit apprehensive about what the results may yield and my ability to interpret them properly.
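
To demystify the technique for myself, I sketched the smallest possible topic model below, using the gensim library’s LDA implementation as a stand-in for MALLET (both rest on the same basic idea of probabilistically grouping words that co-occur). The four miniature “documents” are invented, and real newspaper runs would need far more text and tuning.

```python
# A minimal topic-modeling sketch with gensim's LDA: uncover two latent
# "topics" (weighted word clusters) across four tiny invented documents.
from gensim import corpora, models

docs = [
    "invasion landing craft beaches normandy troops",
    "eisenhower command allied leaders strategy planning",
    "landing beaches troops casualties normandy",
    "allied command eisenhower planning invasion",
]
texts = [d.split() for d in docs]

dictionary = corpora.Dictionary(texts)
bow_corpus = [dictionary.doc2bow(t) for t in texts]

# Ask the model to infer two latent topics across the corpus
lda = models.LdaModel(bow_corpus, num_topics=2, id2word=dictionary,
                      passes=20, random_state=42)
for topic_id, words in lda.print_topics():
    print(topic_id, words)   # each topic = a weighted word list
```

The output is just weighted word lists; the work of sorting those lists into meaningful, written categories — Nelson’s real contribution — remains entirely the historian’s job.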

Overall, text mining and topic modeling intrigue me greatly. The only thing to do now is put them to use. But some big questions remain for me. How will I apply software such as MALLET and the Google Ngram Viewer to the newspaper databases found on Newspapers.com, Chronicling America, or ProQuest? The more I consider the technical aspects of putting these tools into practical use, the more uneasy I feel. But isn’t getting out of one’s “comfort zone” the path to bigger and better discoveries?

Steve Rusiecki

Week Four Reading Blog: Deconstructing the Database’s Perks and Perils

Our previous weeks’ readings consistently reflected both enthusiasm and caution about the new and not-so-new digital tools available for all historians. In keeping with this message, the digital “database” has now become the next digital-history resource that can potentially elevate historical scholarship to a new level — “digital history 2.0,” according to James Mussell — while still threatening to “scald” unwary and neophyte users. But for me, the readings suggested that the digital database, if used properly and for selected purposes, can be a boon to the average historian — even the most digitally challenged of us. Yet in spite of my enthusiasm for the digital database, Lev Manovich’s characterization of the database as the “natural enemy” of narrative resonates strongly with me. My own experience with Edward Ayers’s The Valley of the Shadow Web site gave me reason to believe that Manovich’s point has merit. Thus, the message to me is clear: Proceed with caution, and understand the strengths and weaknesses of the database before putting it to use.

The first message I took from the readings was the need to recognize the true nature — both good and bad — of the digital database as described by Manovich. I am inclined to agree that a database is simply a digital archive in which each of its stored components, all holding equal status, depends upon the skill and intent of the user to unleash that database’s potential. But, as Patrick Spedding cautioned about the limits of the ECCO database, recognizing the shortcomings of those digitized holdings is absolutely essential to making effective use of them. Understanding the limits of OCR and recognizing error rates in digitization, as pointed out by Simon Tanner in an earlier reading and further underscored by Spedding, are critical to putting any database to effective use.
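
Tanner’s point about error rates became far more vivid once I did the arithmetic myself. The sketch below uses a 2 percent per-character error rate, a number I picked purely for illustration (it is not Tanner’s figure), and the simplifying assumption that a search hit requires every character of the phrase to survive transcription.

```python
# A back-of-the-envelope sketch of why OCR error rates matter for search.
# Hypothetical: 2% per-character error rate; a hit requires every
# character of the phrase to be transcribed correctly.
char_error_rate = 0.02
phrase = "invasion of europe"

p_intact = (1 - char_error_rate) ** len(phrase)
print(f"Chance an occurrence survives OCR intact: {p_intact:.0%}")
# ~70% -- so roughly 3 in 10 genuine occurrences would never appear as hits
```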

In my own tinkering with the historical newspapers archived on ProQuest’s site, I learned quickly that each search for a key term produced not simply easy-to-digest results but actually a whole new database. In other words, in the simple act of saving my results to the “My Research” feature in ProQuest, I engaged in the digital manipulation of a database that Manovich discussed. In effect, I had created a new database from which I could potentially apply further searches with greater granularity. But I stumbled a bit here, since I could not figure out how to conduct these more refined searches from my newly created database. What I did discover, though, was that ProQuest’s search function queried the actual OCR-produced scans of the newspapers. And instead of identifying complete phrases, the search produced results according to each word in a phrase. Thus, my new database became filled with needless hits based upon individual words in the phrase — most notably the preposition “of” — instead of results for the complete phrase “invasion of Europe.” Even so, the results quickly narrowed the database for me and presented some discernible patterns. Most importantly, they packaged into a newly defined database the very primary sources to which I could apply the historian’s traditional qualitative analysis. I kept in mind Sean Takats’s concerns about the abundance of source material as I tinkered with ProQuest, but my ability to reconfigure and limit the initial database helped alleviate some of those concerns. I was able to follow my “hits” directly to the OCR-scanned facsimile of each article and assess it individually. But I still found myself sorting through a lot of superfluous stuff. Thus, the act of sifting through numerous sources, some useful but many not, showed me that the traditional approach to researching history still applies — but the research part now seems much more efficient thanks to the digital database.
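
The word-by-word behavior made more sense once I mocked it up myself. Below is a toy comparison; the article snippets are invented, and I make no claim that this is how ProQuest’s engine works internally. It simply shows why an OR-style word search drowns you in hits while a true phrase search stays narrow.

```python
# A toy illustration of word-level vs. phrase-level searching: an OR-style
# word search matches any article containing any word of the phrase
# (including "of"), while a phrase search matches only the exact string.
articles = [
    "the invasion of europe is expected within months",
    "shortages of rubber plagued the home front",
    "europe braces as allied invasion rumors swirl",
]

phrase = "invasion of europe"
words = phrase.split()

or_hits = [a for a in articles if any(w in a.split() for w in words)]
phrase_hits = [a for a in articles if phrase in a]

print(len(or_hits), "word-level hits")   # 3 -- every snippet contains "of" or another word
print(len(phrase_hits), "phrase hits")   # 1 -- only the exact phrase
```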

The second point from the readings that grabbed my attention was the idea that databases can now allow historians to make subordinate points within a broader argument without engaging in extensive — and possibly digressive — research. W. Caleb McDaniel described how some scholars used search “hits” to quantify and support “points that were secondary to their arguments,” but the danger rests in what Lara Putnam warned against as “superficiality or topical narrowness.” I am inclined to agree, yet I find this use of the database particularly intriguing for my own dissertation research. For example, my focus will be on examining how the radio and print media portrayed D-Day as it was happening on 6 June 1944. But I want to explore, as a subordinate matter, the degree to which newspapers “talked up” the invasion in the six months leading up to the event. The idea of reviewing extensive six-month samplings of multiple American newspapers to support a smaller point contained in one or two paragraphs seemed quite daunting and not the best use of my time. I liked the terms that Putnam used to describe the possibilities of making transnational connections to historical arguments through searches among multiple databases — “side-glancing” and “term-fishing.” These terms helped me to conceptualize how databases can enable the inclusion of subordinate points within an argument, even without crossing the national boundaries Putnam had in mind. In effect, the point made is strictly contingent upon a targeted — but hopefully not superficial — acknowledgement of another factor that bears directly on the core argument, without the need for an in-depth examination of numerous primary sources. But “hope” isn’t a method; and, in spite of my attraction to the concept, I’m concerned that such results may in fact make narrow, tenuous points that won’t withstand scrutiny. My greater fear is that historians more broadly may come to rely on basic patterns gleaned from databases to make many of their key points. My own thinking here is incomplete, and I may have actually talked myself into the very pitfall that concerned Putnam — a tendency toward superficiality. Frankly, I won’t know how I feel about my own re-defined notions of side-glancing and term-fishing until I put them to the test.

In sum, I see databases as a great thing for historians, but I’m skeptical about characterizing them as a genre unto themselves. Manovich’s article makes an interesting case for the database as some new “cultural form,” but I can’t help but see databases (at least for the moment) as just another digital tool historians may leverage to help accelerate and enrich their research. In other words, a database is really just an archive without the dust.  And so I proceed with cautious enthusiasm!

Steve Rusiecki