Friday, 26 October 2012

Electronic Publishing



Discussion of texts:

Kay and Goldberg, Personal Dynamic Media
John Willinski, Toward the Design of an Open Monograph Press
Peter Suber, A Primer on Open Access to Science and Scholarship


The more I read for this class the more amazed I am by, as is pointed out in the introduction to the Kay and Goldberg reprint, the uncanny ability of some people to predict the future of technologies long before they come to fruition. I must point out, however, that there remains a fine line between a visionary and a crackpot, and for every person with a fantastical idea that actually comes true there are 99 people who remain simply crazy (Xanadu comes to mind specifically).

If we treat Kay and Goldberg as visionaries, though, there are several key aspects of their ideas and experiences as described in the article that perhaps help explain why they didn't end up in the loon camp. Their research letting children program computers was particularly intriguing for its results: youth walk into a project with no preconceptions of what is and isn't possible, and are far more inclined to original innovation as a result. The article states directly that the kids they worked with looked at programming for what it "ought" to do, not what it had previously done and therefore might do. I think part of what the authors brought to notebook computing was a similar perspective (whether as a result of this research or independent of it I'm not sure). Specifically, in the line that reading on a computer "...need not be treated as a simulated paper book since this is a new medium with new properties" they show a tendency toward true innovation; they aren't just slicing bread and calling it a new creation, they're questioning if and why 'sliced' was ever the best way for bread to be in the first place. Vitally, they also add the catch that the resulting product is "...not to be worse than paper in any important way", something that I'm sure many failed designs did not manage, particularly those that seek to imitate the real world but collapse it into (effectively) two dimensions.

Their genius only went so far, however, as other functions they describe are less innovative. In the painting function, for example, "a brush can be grabbed with the “mouse,” dipped into a paint pot…" and then used to draw. This fundamentally goes against the standards they set for the reading application by replicating arguably the biggest drawback of real-world painting while adding no improvement to compensate for what is lost (e.g. fine control of motion).
Kay and Goldberg may have been visionaries, and may have been proven right by time, but they were also ultimately just a single domino in the line that has taken computing technology this far.

This topic overlaps conveniently with some very interesting discussions I've been having in my LIS course about online publication, and I can't help but incorporate some of the things pointed out in those conversations, so consider this my disclaimer that not all of what follows is my own original thinking, but rather extensions of things that have come up elsewhere.

I am suspicious of Willinski's use of statistics to demonstrate the switch from monographs to journals. Everything he states is in percentages and ratios, both of which can easily misrepresent the raw numbers. Even if the figures he gives aren't misleading, however, I think the switch may be more representative of the consistent rise of the Sciences (at least comparatively, if not directly to the detriment of the Humanities) in universities. Scientific research has frequent publication at a high turnover, which (as pointed out in Suber's article) is better suited to journal publication. If there are an increasing number of students in scientific disciplines who need access to these journals then naturally more of them will be purchased, and in increasing proportions over time. Monographs, however, remain important for Humanities scholars, and Willinski admits that sales in many Humanities disciplines remain strong, but if the number of students enrolling in English or History, for example, remains constant, then it is only logical that the number of books purchased by libraries to support them would also remain constant.

Furthermore, Willinski's solution to a perhaps misunderstood problem is itself flawed in some respects. As he states, the benefit of the monograph format is that it can "...work out an argument in full...provide a complete account of consequences and implications, as well as counter-arguments and criticisms". The Humanities and monographs have a symbiotic relationship for this very reason: Humanities scholars go back, sometimes hundreds of years, to longstanding sources to build comprehensive arguments that can only be expressed in monograph form. As a result of this thoroughness, the monograph they produce may still be of use to future scholars hundreds of years down the road who are building similarly thorough arguments. Willinski's attempt to move monographs online, where their lifespan will in all likelihood be much shorter, all but defeats the purpose of writing lengthy works to begin with. It makes electronic monographs in many ways as transient as journals.

The push to move online as outlined in both Willinski's and Suber's articles poses several intrinsic problems, only some of which the authors discuss. The issue of peer review, for example, is explored fairly thoroughly by Suber, and his arguments hold well enough for a reputable open source journal or repository. The nature of these projects, however, is that anyone willing to make the effort can create an online journal, and there is no enforcing body beyond reputation to implement a thorough peer review process; reputation takes time to accrue, and at its best can be fairly subjective. This means that researchers run the risk of either accessing an open source repository with no verification that its contents have been properly vetted, or accessing only those that are acknowledged as reliable, potentially creating an oligarchy of open source journals. This takes us to Suber's point about pay journals being more likely to be corrupt than open ones. If you have only a few trusted open source article sites, whose output is limited because of their screening process, what is to stop them from beginning to charge more for each article considered in order to ensure their overhead costs are covered even during times of lean submissions and publications? And why would they suddenly reduce those costs again when the number of incoming papers increases? Greed is an unfortunate portion of human nature, and I wonder if any 'open source journals' can or will maintain the values on which they were founded.

My final point is about the current importance of citation analysis in universities for hiring and maintaining professorships. The topic is mentioned in passing by Willinski, but I think it is a far larger consideration for the future of electronic publishing than either author suggests. As much as some evaluating bodies are moving towards an analytics approach rather than a 'times cited' model, at this point tracking source use through unindexed sites is far more difficult than in print. More importantly, though (as per an LIS discussion), some institutions are apparently now setting standards as to exactly which publications count in terms of performance reviews. In this context open source repositories may let scholars 'take back' the forum of journals, but that power is for naught if what they're publishing doesn't help promote their careers. I fear online publishing efforts may well hit a wall soon if this becomes a norm, and some collective agreement, however unspoken, will need to be reached about the value of open source journals before they can carry real weight.

Sunday, 21 October 2012

Multimedia

Discussion of text:
Manovich, The Language of New Media PDF (first two chapters, pages 43 to 114)

My first comment on this text has to be about the distracting number of grammatical errors. I'm not even sure 'grammatical' is the way to describe them; they're more like typos where the incorrect letter still resulted in a real word and the spellchecker didn't catch it. Apparently the author wasn't into proofreading. I bring them up first because of their quantity (there are at least four on pages 50-51 alone, and they only become more frequent thereafter) and severity: some were bad enough that I had to reread the sentence a few times to figure out what he actually meant to say. I hope the international students in the class aren't bogged down by them. The author's name, Manovich, suggests that his first language may not be English, and therefore a certain level of mistakes is understandable, but some of the errors are grievous enough that they cannot be pinned on a language barrier alone ("Van Gog", for example, on page 53). [An addition as per the discussion in class: I maintain that even as a first draft there are an unbelievable number of mistakes. Those involving proper names are particularly mind-boggling.]


As for the content, honestly I walked away from the text with a very limited understanding of what the author's point was. At the beginning of each section I could follow what topic he was going to discuss, but by the time I slogged through a series of cultural references (most of them, in my opinion, used to keep the book 'hip', not because they helped explain the idea) I had completely lost the thread of what he meant. For example, I understand that he disagrees with the five points that supposedly define 'new media', but I have no idea what he thinks actually defines it. But this is starting to sound like a book review, and that is not the point of this blog.


I fundamentally disagree with the author's assertion (on page 87) that the printed word is being surmounted by cinema. In the context of what he's talking about, I am pro multimedia: being exposed to the same issue through different formats can only increase understanding, but neither in computers nor in the rest of the world is text losing importance. If you want proof, just look at the slew of movies being produced that are based on books; if printed works weren't being produced, cinema would be lost. Even in terms of webpages, while the YouTubes and Flickrs of the world have become incredibly popular, I would need to see some VERY convincing statistics to believe that sites where text is a major component (if not the main component) are not multiplying faster.


I did, however, enjoy his discussion of modern computer screen capabilities and limitations, for example that, unlike previous media, modern screens can present real-time changes but are still limited to a single perspective and a limited set of actions. I remain skeptical that virtual reality monitors will negate the latter problem; ultimately the user is still tied to what was programmed into the system, and I maintain that no matter how diligent the programmer, a user will inevitably find a way to move that was unanticipated and that the system won't be able to handle. As amazing as a holodeck would be, I don't think anyone alive today is going to live to see one.


One of the most interesting ideas the author brings up for me, though one that departs to some degree from multimedia, is the punchcard-controlled Jacquard loom. Manovich discusses it as an early example of computers being used to produce images, but I find it fascinating in the context of computerized production. In a world of mass production and automated assembly lines, computer use in industry is now a commonplace occurrence, but I had no idea it went back to 1800. Maybe earlier; I'm not really well informed on history before about 1700. Bringing it back to multimedia, though, I'm thinking less of making cars, for example, or even textiles (though this is the obvious continuation of Jacquard's invention), and more of the creation of precise three-dimensional didactic models by first modeling them in a computer and then having the said computer build a physical representation (usually out of plastic) layer by layer. I have seen such projects (out of the U of A, actually) done to recreate archaeological sites, showing what the buildings would have looked like thousands of years ago. This is more an example of computers being used to create multimedia than multimedia existing in computers, but if Manovich gets to go off on a long discussion of performance art, then why not.

Sunday, 14 October 2012

Hypertexts

Discussion of Readings:


I'll begin with Conklin's article, since it describes the general concept of hypertexts, while Bush's "As We May Think" is an example of one. The most difficult part of reading "Hypertext" for me was that, being published in 1987, the ideas and examples it lists are at least 25 years old at this point. Computer displays and input devices have changed so much in this time that I have trouble visualizing what he means, or, in cases where screenshots are provided, understanding how the programs would have been used. I am going to start, therefore, by trying to provide modern examples of the "four broad applications" Conklin names, though this is as much to solidify the ideas in my own mind as it is to comment upon them.

The 'macro literary systems' are simple enough to compare (unless I've completely missed the mark) to a wiki: a database that holds multiple text-based articles (files), with links providing shortcuts between files, as well as within a file. I don't know where the graphical representation of the links, which Conklin maintains must be present (although he goes on to list numerous examples where it isn't), comes in, but maybe there is such a feature that I have just never used.

The 'problem exploration tools' are somewhat more difficult for me to understand. Conklin describes them as "early prototypes of electronic spreadsheets", easy enough to picture as a user of Excel, and says that users can hide sections and subsections, which makes me think of iTunes and showing only the song information you care about as columns in a playlist. He goes on, however, first to discuss "teleconferencing" and moving between threads, then ranking other people's posts, which sounds like a message board: a series of posts and comments provided by multiple users which form a tree structure as people comment and then comment on comments. The last example he gives, though, 'outline processors', are effectively interactive tables of contents, such as can be found in some Adobe PDF files, suggesting that the technology he's describing split into two streams (quite possibly before this article was written, though no one was aware of it yet).
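To make that tree structure concrete for myself, here's a rough sketch in Python of how a message board's comments-on-comments might be modeled; the class and method names are entirely my own invention, not anything from Conklin:

```python
# A hypothetical sketch of a message-board tree: each post carries a
# score (other users can rank it) and a list of replies, and replies
# to replies form the tree Conklin's "teleconferencing" example suggests.
class Post:
    def __init__(self, author, text):
        self.author = author
        self.text = text
        self.score = 0       # rankings from other users
        self.replies = []    # comments on this post

    def reply(self, author, text):
        child = Post(author, text)
        self.replies.append(child)
        return child

thread = Post("alice", "Original post")
c1 = thread.reply("bob", "First comment")
c1.reply("carol", "A comment on the comment")
c1.score += 1
```

Moving "between threads" then just means jumping from one branch of this tree to another.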

The examples Conklin provides describing 'structured browsing systems' explain fairly clearly what they are; the help manuals attached to word processors apparently haven't changed much (except to add search functions, perhaps). The processes he outlines also sound very similar to how internet browsers work, though: the user follows a series of links through different pages, and the browser keeps track of the path followed to allow backtracking.
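That backtracking mechanism is simple enough to sketch in a few lines of Python; this is my own minimal illustration of the idea, not anything from Conklin's article, and real browsers are of course more elaborate:

```python
# A minimal sketch of link-following with backtracking: each link
# followed pushes the current page onto a history stack, and going
# back pops the stack to retrace the path.
class Browser:
    def __init__(self, start):
        self.current = start
        self.history = []    # pages we can go back to

    def follow_link(self, page):
        self.history.append(self.current)
        self.current = page

    def back(self):
        if self.history:
            self.current = self.history.pop()
        return self.current

b = Browser("index")
b.follow_link("chapter1")
b.follow_link("chapter2")
b.back()    # returns to "chapter1"
```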

In the 'general hypertext technology' section, the first few examples seem to have aspects of a course management system, or of bibliographic tools such as Zotero, in that they are places to electronically collect webpages, PDFs, files, etc. that are relevant to a topic, and to organize them into a tree structure using headings and subheadings. These do not, however, have what I would call links between the files, rather links to the files from a main page. Furthermore, the later examples vary from this significantly. As Conklin says, this section represents tools meant to explore hyperlink capabilities, not to perform any specific task, so trying to find a general modern counterpart is a futile effort. It remains interesting, though, that some modern applications did apparently evolve from these tools.

These modern counterparts having been established, it is incredible to think, if Conklin's assertion is correct, that computer scientists were wary for decades of implementing hypertext technology, as it is now one of the most widely used forms of text analysis. Whether this change started out of increased computer use (as a result of reduced cost) or from a few programmers having faith enough in the technology to undertake projects showing what it could do, it certainly continued because of the capabilities and advantages Conklin lists. The ability to navigate through multiple parts of multiple texts with a few clicks or keystrokes was a genuine step beyond printed reading, one I'm not sure has been matched by any other development.

The discussion of hierarchical vs. non-hierarchical structures takes me back to HTML and XML theory, where all boxes must fit inside other boxes for the system to function, and the limitations this leads to for overlapping features (e.g. paragraphs spanning multiple pages). In many ways this is the same, of course; non-hierarchical arrangements are effectively just a series of empty tags, but the parallel concerns are intriguing.
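The boxes-inside-boxes constraint is easy to demonstrate with Python's standard library XML parser; this little experiment is mine, not from the readings:

```python
import xml.etree.ElementTree as ET

# Properly nested markup ("boxes inside boxes") parses fine.
nested = "<doc><p><em>text</em></p></doc>"
tree = ET.fromstring(nested)

# Overlapping spans -- e.g. an emphasis that crosses a paragraph
# boundary -- are not well-formed XML, and the parser rejects them.
overlapping = "<doc><p><em>text</p></em></doc>"
try:
    ET.fromstring(overlapping)
    rejected = False
except ET.ParseError:
    rejected = True
```

The second string is exactly the kind of overlapping feature that a strict hierarchy cannot express, which is why workarounds like empty milestone tags exist.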

On to Bush's article:

I once went on a trip to England with a girl who would take in the ballpark of 1000 pictures a day of everything from architecture, to people, to her meals. The only real reason for most of the photos was that there was no reason not to take them: the memory card in her camera could easily accommodate them, taking each snapshot needed only a moment's time, and at the end of the day she could instantly erase any she no longer wanted with no added expense incurred. Bush's example of advances in photography allowing scientific processes to be easily recorded is apt, as is his assertion that the problem then becomes slogging through such a plethora of data.

With computers and the internet we face a similar problem. SLIS courses extol to their students that online access to resources will not replace librarians precisely because someone needs to organize and catalogue the information. Bush wrote this article with no idea what was to come, but in his own terms still hit upon "a future device for individual use, which is a sort of mechanized private file and library". His prophetic descriptions of hypertexts are almost eerie, but they make me wonder whether there is a subconscious collective concept as to how information should be displayed and interacted with, or whether there is rather an unbroken line of scholarship extending back this far: if Bush had conceived of his memex differently, would computers and hypertexts look as they do today?

Finally, I also need to point out a few fantastic lines. If Bush's other works are similar to this one in writing style, a book of quotations could be made.
"truly significant attainments become lost in the mass of the inconsequential"
"for at that time and long after, complexity and unreliability were synonymous"
"One might as well attempt to grasp the game of poker entirely by the use of the mathematics of probability"