Wednesday, December 3, 2014

Wikipedia Project Reflection


My ENC4404 Advanced Writing and Editing class has taken on a huge task, one that required a heavy amount of participation. We decided to create our own Wikipedia article on the topic our entire semester has been based on: public sphere writing. It was not only a group project, but a group project that required the entire class to collaborate as a whole. On the verge of completing this task, I would have to say that the hardest part of it was making everyone's individual contributions flow cohesively. In his article "The Rhetoric of Intertextuality," Frank D'Angelo discusses intertextuality, a term suggesting that "every text is connected to other texts by citations, quotations, allusions, borrowings, adaptations, appropriations, parody, pastiche, imitation, and the like. Every text is a diagonal relationship with other texts" (D'Angelo 33). This relationship between texts is reflected directly in my class' Wikipedia project. Without the use of intertextuality, our Wikipedia page would not have come together so quickly and smoothly. Every section of our "Public Sphere Writing" Wikipedia article had to borrow information from other scholarly sources in order to create new discourse that fit the guidelines of Wikipedia.

Since anyone with Internet access can enter Wikipedia and edit anything they wish, Wikipedia provides its users with a set of guidelines to follow in order to maintain order. These guidelines are very helpful when creating an article for Wikipedia. A page titled "Wikipedia: List of guidelines" provides a list of important guidelines, which tell users how to delete, how to edit, how to title articles, how to behave, and so on. By following these guidelines and providing content through the use of intertextuality, I was able to contribute to the Wikipedia community and further my role as a true Wikipedian.

Although collaborating with such a large community was a difficult task, another task that proved just as difficult was my individual work of making sure a source was provided for each substantive claim. This task became so complex because of the size of the article and the amount of sources and information provided. The amount of information in our "Public Sphere Writing" Wikipedia article is due to the fact that we have intertextually drawn on a vast number of sources accumulated throughout the semester and from other Editing, Writing, and Media courses at Florida State University. This ties back into D'Angelo and his discussion of different modes of intertextuality: "The fifth mode of intertextuality is pastiche. The American Heritage Dictionary defines pastiche as 'a work or style produced by borrowing fragments, ingredients, or motifs from various sources'" (D'Angelo 39-40). By borrowing fragments from so many sources, our Wikipedia class article expanded into a rather large space. However, having a surplus of information isn't always a bad thing, as long as all of the information provided is necessary and has sources to back up any claims that are made.

Ultimately, being a part of this task was a lot of hard work, consisting of combing through numerous sources and making everyone's sections flow well together. Even though the work was hard, it definitely left me with a rewarding feeling of accomplishment. My class now has an article that is self-published within the realm of Wikipedia. Putting together our "Public Sphere Writing" article required a lot of collaboration and a lot of intertextuality. As I mentioned in a previous post, Wikipedia is defined by collaboration and intertextuality, and I know that even more so now, having completed this assignment. Now that this collaborative project has arrived at completion, I have the urge to go back into Wikipedia's "stub categories" or "articles to be expanded" to do some more collaborative editing. This may be the beginning stages of becoming a Wikipediholic.

Thursday, November 20, 2014

Becoming a Wikipedian


After completing my first editing task on Wikipedia, I found that it is kind of nerve-racking to edit someone else's semi-established piece of work in such a public space. It is important to keep in mind that Wikipedia is a collaborative website and multiple people have already come along and made changes to every article on the site. The article I chose to revise was about anonymous web browsing, and although I know little about this subject, I chose it to make minor edits, which Wikipedia considers to be "typo corrections, formatting and presentation changes, rearranging text…" (Wikipedia). In other words, I corrected some simple errors and eliminated some unnecessary sentence fragments. In my opinion, I just polished up an article that was a little rusty. In "The Lessons of Wikipedia," chapter six of his book The Future of the Internet and How to Stop It, Jonathan Zittrain mentions that "Wikipedia -- with the cooperation of many Wikipedians -- has developed a system of self-governance that has many indicia of the rule of law without heavy reliance on outside authority or boundary" (Zittrain 143). Having become an official Wikipedian and completed my first editing task, I can fully understand the aspect of "self-governance" within Wikipedia that Zittrain discusses. With guidelines and features such as the "edit summary" box, Wikipedians are free to make edits without any official authority. As long as every edit made is justified in the edit summary section and abides by the guidelines, Wikipedia is able to maintain a system of self-governance.
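
The "edit summary" box and the minor-edit flag that make this self-governance possible are also exposed through MediaWiki's public Action API, not just the web form. Below is a minimal, hypothetical sketch (the target page, appended text, and summary wording are my own inventions) of how such a justified minor edit might be submitted programmatically; it is illustrative only, not something I actually ran for this assignment.

    import requests

    API = "https://en.wikipedia.org/w/api.php"
    session = requests.Session()

    # Step 1: request a CSRF token (logged-out sessions receive the anonymous token "+\\").
    token = session.get(API, params={
        "action": "query",
        "meta": "tokens",
        "format": "json",
    }).json()["query"]["tokens"]["csrftoken"]

    # Step 2: submit the edit. "summary" is the edit-summary box, and "minor"
    # marks the change as a typo-level correction, mirroring the kind of edit described above.
    response = session.post(API, data={
        "action": "edit",
        "title": "Wikipedia:Sandbox",                  # hypothetical target page
        "appendtext": "\nTesting a minor copy edit.",  # hypothetical change
        "summary": "Fixed typos and formatting (minor edit)",
        "minor": "1",
        "token": token,
        "format": "json",
    })
    print(response.json())

Whichever route an edit takes, it is the recorded summary that lets other Wikipedians review and, if necessary, revert the change, which is exactly the self-governance Zittrain describes.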

After becoming a Wikipedian, I realize the full extent of the problems people have with Wikipedia. While in the editing view, there is liberty to change literally anything in the article. However, after reading so much about Wikipedia, I know for a fact that large erasures of sections, vandalism, and biased statements are watched heavily by bots, administrators, and loyal Wikipedians. If there is one thing I learned about Wikipedia, it's that it involves a heavy use of intertextuality. According to James E. Porter, in his article "Intertextuality and the Discourse Community," "Examining texts 'intertextually' means looking for 'traces,' the bits and pieces of Text which writers or speakers borrow and sew together to create new discourse" (Porter 34). Wikipedia, being a collaborative entity, is nothing but fragments of information sewn together with the goal of creating new discourse. Without the pre-existing information needed to create well-developed articles, Wikipedia would not exist.

Ultimately, I have learned that Wikipedia is defined by two things: collaboration and intertextuality. Just as Wikipedia relies on intertextuality for its existence, it also relies on the collaboration of its Wikipedians to put those fragments of information together in a cohesive manner. The collaborative aspect of Wikipedia is what allows such a broad spectrum of information to become available at such a rapid rate. All in all, becoming an official Wikipedian has caused me to realize how Wikipedia really functions as a “sandbox.” Just as children collaborate and play “nice” in a sandbox, Wikipedians collaborate and respect one another’s opinions, additions, and edits to articles. After stepping into the realm of Wikipedia and becoming a Wikipedian by making some minor edits, I am ready to play in the sandbox and make worthy additions to my Advanced Writing and Editing class' Wikipedia article on public sphere writing!

Tuesday, November 11, 2014

Composing for Wikipedia


Wikipedia does its best to keep its articles informative and reliable. It does so by adhering to a set of standards for composing articles. These standards can be referred to as its core principles and include things such as "Neutral Point of View" and "Verifiability." In addition, these standards are also reflected in Wikipedia's "featured articles" section, which enforces a set of standards for well-written, non-biased, and well-researched articles. All articles posted within the "featured articles" section of Wikipedia are required to follow the standards previously mentioned. So, to see whether these standards are actually adhered to, I did some research. In a November 8, 2014 featured article about pelicans, Wikipedia seems to keep to its well-written, well-researched standards. This article features non-biased, informative descriptions of the pelican while identifying the scientific classification of the bird: "Pelicans are a genus of large water birds that makes up the family Pelecanidae." The article goes on to describe more physical features of the bird in a coherent and unbiased manner, and provides the reader with further facts about pelicans such as their taxonomy, fossil records, and behaviors, and even the use of the bird in religion and mythology. Although religion tends to be a controversial subject, the article remains unbiased and strictly provides readers with the bird's historical affiliation with Christianity. In doing this research, it's clear that while composing Wikipedia articles it's important to remain unbiased and stay focused on forming well-written, well-researched information. In composing a Wikipedia article it is also crucial to keep citations consistent, which is something the pelican article succeeds in doing: all information is well cited and links to a reliable secondary source. Not only is the information well cited, but the images included throughout the article are used properly and follow the image use policy. Although this research shows that Wikipedia does a good job of enforcing well-written and well-researched information, at times Wikipedia can provide what seems to be useless or excessive information.

In comparing and contrasting two separate biographies of Henry Sidgwick, one on Wikipedia and one in the Stanford Encyclopedia of Philosophy, I found that Wikipedia included some excessive, unnecessary information. In the Wikipedia article, there is an entire section about a woman named Eusapia Palladino who is nowhere to be mentioned in the Stanford Encyclopedia of Philosophy's article on Sidgwick. This woman played a minor role in Sidgwick's life, yet Wikipedia spends time constructing an entire section on her and places it before a more important section about Sidgwick's works. In the Stanford Encyclopedia's article, there is an entire section dedicated to Sidgwick's "masterpiece Methods of Ethics (1907)" (Stanford). It is important to note that articles within the Stanford Encyclopedia of Philosophy remain stable and cannot be edited by the public; Wikipedia is a collaborative encyclopedia, which means that sections can be added and altered separately without the acknowledgment of the original author. As a result, the Stanford Encyclopedia of Philosophy tends to be more coherent and concise with its information. Being a more academic website, the Stanford Encyclopedia of Philosophy provides users with academic resources such as "how to cite this entry" and PDF links. However, in comparison, both articles provide well-written and well-researched information on Sidgwick's life, career, and background. Both articles also provide a well-detailed list of primary and secondary sources; however, the Stanford website provides a more organized list of sources, dividing the primary and secondary sources into two separate sections. After doing this research, I found that it is important to stay coherent and concise while editing and adding multiple sections to a Wikipedia article. It is also important to flow with the original author's content and to avoid irrelevant sections. Nevertheless, the most important standards to adhere to while composing for Wikipedia are having credible research and conveying this research in a coherent and informative manner.

Thursday, November 6, 2014

Plagiarism, Copyright, Fair Use, Oh My!


With the Internet being such a normal part of our daily lives, information is constantly at our fingertips. Although this broad access to all types of information may seem like a positive thing, it also has some negative consequences. Now that the Internet is such a regular presence, plagiarism has become an increasingly common act. However, it is hard to place plagiarism on the hierarchy of academic dishonesty. According to Russel Wiebe in "Plagiarism and Promiscuity, Authors and Plagiarisms," it is less of an ethical violation to cheat on school tasks, "which are largely unreal and therefore, outside the realm of ethical consideration" (Wiebe 30). Even though plagiarism is seen as a negative act, Wiebe sees it as harmless in the realm of inane school tasks and actually believes it can be useful in some circumstances. In his article, Wiebe explains how Kelly Ritter "suggested that students see the plagiarism question not as a question of morality but rather as a question of utility" (Wiebe 36). In other words, students don't care whether it is morally right to plagiarize; they care whether it will somehow help them advance.

Although Wiebe sees plagiarism in the classroom as a relatively harmless act, he still believes that plagiarism is an overall issue in our technologically advancing society and sees the need for different plagiarisms to be defined. In his article, Wiebe explains different suggestions for categorizing acts of plagiarism. For example, "Moore Howard (1995) has suggested three categories in an attempt to define what should be considered plagiarism: cheating, non-attribution, and patchwriting" (Wiebe 32). These categories correspond directly to common acts of plagiarism: cheating would be directly copying the work of another peer, non-attribution would be taking directly from a source without citing it, and patchwriting would be paraphrasing from a source without acknowledging the original source. Other theorists have attempted to define plagiarism in different but similar ways. Brian Martin discusses word-for-word plagiarism, paraphrasing plagiarism, and secondary source plagiarism (Wiebe 33). These definitions are very similar to those presented by Moore Howard and are self-explanatory. Even though these definitions help to categorize different acts of plagiarism, Wiebe still views plagiarism as something that can be beneficial. After all, "it is probably the rare academic who has not engaged in some form of 'dishonesty' in school or in our professional lives" (Wiebe 32). Wiebe makes it clear that most people have plagiarized at some point in their lives and that it is, in a way, part of creating new discourse, similar to intertextuality.

For centuries we have been borrowing from previous generations, even in simple acts of graphic design. In The Laws of Cool, Alan Liu traces graphic design from the nineteenth century to today. In describing various types of typography, he inadvertently shows how each generation takes from the one before it. He explains that "modernist graphic design focused above all on a totality of design… the idea was to look at every page as a whole in which variation and unity were so tightly bound that their very nature altered" (Liu 198). This modernist view of design is later carried over into the age of the World Wide Web: "By the time of the Web… graphics and digital information became part of the same integral design. Both were aspects of the single great canvas now subsuming all the pages… that the modernist designers had created" (Liu 207). Although it isn't made explicit, early Web design derived precisely from modernist design and could be considered a plagiarism of technique. However, instead of plagiarism, this is just considered an extension of "good design."

Moving away from plagiarism, we arrive at the realm of copyright law and fair use, concepts that are also complicated to define. In a case study concerning a Michigan University student, Maggie Ryan, fair use of the student's photo is brought into question. The issue with the use of this student's photo isn't the fact that it was used without her permission; it is how the use of the photo affected the student's rhetorical velocity. In Jim Ridolfo and Martine Courant Rife's case study, "Rhetorical Velocity and Copyright: A Case Study on Strategies of Rhetorical Delivery," they explain that "rhetorical velocity is a strategic concept of delivery in which a rhetor theorizes the possibilities for the recomposition of a text based on how s/he anticipates how the text might later be used" (Ridolfo & Rife 230). This anticipation of how the text might later be used is the issue with the use of Maggie's photo. Maggie intended to get publicity, but not for her personal photo to promote the University's campus life; she wanted publicity for her activist protest against her University joining the WRN. Instead of getting noticed for her acts of protest, she was acknowledged as a typical Michigan U student playing in the snow. The rhetorical velocity of her situation was completely skewed by the University. "In addition to directly appropriating her image, the university also remixed her image...it's clear that not only did the Web team take a picture of Maggie out of context but they also repurposed it by adding the caption 'winter fun learn more'" (Ridolfo & Rife 228). This further emphasizes how the University re-appropriated Maggie's photo and remixed it for their own benefit. The issue with Maggie and copyright law is that "an individual cannot use the right of publicity to claim a property right in his/her likeness as reflected in photographs that were taken in a public place to illustrate a newsworthy story" (Ridolfo & Rife 235). So, although U.S. copyright laws are supposed to protect original works, it is hard to do so with a human body in a public place.

It is easy to say that defining plagiarism and copyright issues is complex; however, it is necessary to determine a line between what is considered fair use and what isn't. As Ridolfo and Rife argue, "we need to stop thinking about copyright law in terms of what isn't possible, but also in terms of what is possible—that is, how rhetors can strategically compose for the recomposition of their own intellectual property" (Ridolfo & Rife 238). The most important thing to take from Maggie Ryan's case study is that it is crucial to recognize an author's rhetorical velocity before recomposing their work, in order to respect the original intention of the rhetor. Copyright and fair use are a little more complex to define than acts of plagiarism. Acts of plagiarism can easily be resolved by simply citing sources correctly; plagiarism is a commonality, especially in today's Internet-savvy society, and it can easily be turned into intertextuality simply by knowing how to properly cite quotations and paraphrased statements. Unfortunately, copyright issues are resolved in a more complex manner, requiring the author's permission and acknowledging the author's rhetorical velocity before repurposing their work.

Thursday, October 16, 2014

Did You Know... Wikipedia Can Be Reliable?


Wikipedia is a well-known online encyclopedia that "is written collaboratively by largely anonymous Internet volunteers who write without pay. Anyone with Internet access can write and make changes to Wikipedia articles, except in limited cases…" (Wikipedia). It is important to note this because Wikipedia is clearly a place for citizen journalism, or public deliberation. As James McDonald mentions in his chapter "I Agree, But…: Finding Alternatives to Controversial Projects Through Public Deliberation," from the book Rhetorical Citizenship and Public Deliberation, "public deliberation is even seen as constituting citizenship: individuals become citizens by discursively—and thus rhetorically—engaging one another in the public sphere" (McDonald 199). Even though McDonald's chapter is about citizens in public forums for political issues, it still largely relates to Wikipedia's collaborative features. Since Wikipedia enables "anyone with Internet access" to make changes to its website, it automatically becomes a part of the citizen genre. Because citizens are able to change the information on Wikipedia's website, the site has received many critiques of the reliability of its content. However, through projects and policies such as WikiProject Skepticism, Wikipedia:Verifiability, and Reliability of Wikipedia, Wikipedia attempts to make its information as credible as possible.

Wikipedia uses WikiProject Skepticism to monitor "articles related to Scientific Skepticism… The project ensures that these articles are written from a neutral point of view, and do not put forward invalid claims as truth" (Wikipedia). This page includes the goals of the project, all of which encourage quality information and reliable sources. The Wikipedia:Verifiability page ensures that people reading and editing the site "can check that the information comes from a reliable source." In addition to ensuring this ability to fact-check, it also assures users that "its content is determined by previously published information rather than the beliefs or experiences of its editors." On this page, users are also warned not to "use articles from Wikipedia as sources... Content from a Wikipedia article is not considered reliable unless it is backed up by citing reliable sources. Confirm that these sources support the content, then use them directly" (Wikipedia). Wikipedia knows the risk of calling itself a reliable source when all of its information relies on citizen journalists and volunteers. Finally, the Reliability of Wikipedia page informs users of how reliable the information provided is by discussing "several studies [that] have been done to assess the reliability of Wikipedia." By quoting studies that compare its accuracy to the Encyclopedia Britannica, Wikipedia attempts to confirm its credibility with its users. Through collaborative editing, Wikipedia is able to quickly remove false or misleading information. However, it is ultimately up to the user to determine the accuracy of the information provided by fact-checking it before relying on it.

All of Wikipedia's project goals seem to aim at improving the accuracy of the information provided to users. That being said, I decided to do a little research myself to determine the accuracy of a Wikipedia page from the "Did You Know…" section of its homepage. The page I chose to analyze was about the Fairy lorikeet, "a species of parrot in the family Psittaculidae," as Wikipedia claims. After reviewing the sources used to accumulate the information about the Fairy lorikeet on Wikipedia, I found the sources to be quite credible. I find the International Union for Conservation of Nature source to be reliable because the website comes directly from an organization. Not only does the information come from an organization, but the site also includes a page confirming its resources. However, this website gets its information about birds from another organization's database: birdlife.org. I still find both of these sources trustworthy because they come from organizations that actually do field research and document their work. "BirdLife International is the Red List Authority for birds and as such they have provided all the bird assessments… these assessments and their accompanying documentation reflect the information that appears on the World Bird Database developed and maintained by BirdLife International." So, through the use of official organizational sources, Wikipedia is able to accurately provide information, such as the Fairy lorikeet's scientific name, Charmosyna pulchella, to its users.

Another source Wikipedia uses to confirm its information is the World Parrot Trust. This information seems credible because the Trust also gets its information from BirdLife International, which has already established credibility. However, its sources are also linked to a different site called "lexicon-of-parrots.com." I can only assume that this source is credible because much of the information provided on this website is also provided by credible sites such as BirdLife International. This shared information is another way writers and editors collaborate outside of Wikipedia; there is a sort of intertextuality at play between each source. Unfortunately, one source at the bottom of the World Parrot Trust website has me slightly doubting its credibility. The source links to a website called "cites.org"; however, when the link is clicked it takes the user to a screen that reads: "Page not found: Sorry, but the page you're looking for does not exist." Things like this can cause a source to easily be deemed unreliable.

Although the third source included on the Wikipedia page hails from a typically avoided ".com" website, it still appears to be reliable. The information provided on this page lines up with other sources such as BirdLife International and the World Parrot Trust; unfortunately, in order to read more information a subscription is required. On the other hand, the website is completely based on a published handbook titled Handbook of the Birds of the World Alive. So, the website is essentially an electronic version of the information reflected in the actual handbook and can be deemed reliable due to the fact that the information is veritably published as a hard copy.

After checking various facts from the Wikipedia page about the Fairy lorikeet against the sources provided, everything seems to be correct. Although there may be small details that don't line up exactly, the facts that I have checked are completely accurate and reliable pieces of information. The information provided on Wikipedia is definitely built through intertextual context. As James E. Porter puts it in his article "Intertextuality and the Discourse Community," "We can distinguish between two types of intertextuality: iterability and presupposition. Iterability refers to the 'repeatability' of certain textual fragments…" (Porter 35). Within many Wikipedia articles iterability is undoubtedly apparent, especially on the page about the Fairy lorikeet. Many of the descriptions and facts about the bird species are repeated across multiple websites. Ultimately, after reviewing the information collaboratively provided by Wikipedia on the Fairy lorikeet page, I find all of the information I have fact-checked to be credible and reliable fragments of information.

Thursday, October 9, 2014

The Progression of Web Design Through Time


We are situated in a time when the Internet plays a massive role in our everyday lives. Being aware of the time we live in is crucial to the analysis of Web texts in Carolyn Handa's The Multimediated Rhetoric of the Internet: Digital Fusion. With the advancement of technology, the Internet has become not only an "information superhighway" but also a "digital super mall" (Handa 83). The advancement of Internet technologies has caused the designers of Web pages to be more conscious of how they create new sites for their consumers. The construction of Web pages must now provide more than a simple offering of information; they must be easy to navigate, work flawlessly, and bypass irrelevant content. "Web site design today must therefore involve gathering data through Web analytics programs, mining the data properly, understanding the data, and then translating that understanding into Web pages that are highly effective in the rhetorically fused presentation" (Handa 85). Living in a time when the Internet is a crucial part of so many people's lives, Web designers are forced to understand and translate data so that users can, in turn, understand what they receive. Focusing on time becomes crucial in trying to create effective Web pages and in trying to sell products or services effectively on those pages. In his book Appeals in Modern Rhetoric: An Ordinary Language Approach, M. Jimmie Killingsworth includes a chapter about the appeals to time. In this chapter he communicates that time is more than awareness of temporal context as he focuses on the term "modern." He says that "'Modern' implies that time has a special value. It's up to date and better than old-fashioned, outmoded, things" (Killingsworth 39). This view of time as modern allows us to focus on the concept of progression.

Both Handa's and Killingsworth's chapters focus on the concept of progression. In Handa's chapter on rhetoric, context, and culture on the World Wide Web, she discusses Bolter and Grusin's concept of remediation: "No medium today, and certainly no single media event, seems to do its cultural work in isolation from other media, any more than it works in isolation from other social and economic forces…" (Handa 86). This concept of remediating preexisting content is a direct reflection of progression, refashioning the old into the new. Tying directly into Killingsworth, this progressive concept of remediation leads into the value of time. Consumers want things that are better, faster, and newer, and that allow them to use as little effort as possible. By remediating certain aspects of different products, new and modern products are created. "Products that save time are particularly valuable… time is money" (Killingsworth 40). Viewing time as something with value, something that can be invested, allows it to be seen as a substance rather than an abstract measure of our lives slipping by (Killingsworth 40). In the concept of the modern, time's material value combines with the idea that the new is more valuable than the old. With this concept in mind, modern Web sites work toward a social impact, and this impact is reached by remediating the old into the new, making content more valuable to the user. "The most eloquent Web sites are the ones that understand the fusion possible between various media and use all…" (Handa 118). Through this fusion of various media, a larger audience can be reached. However, this "fusion" surpasses the concept of remediation and digs into the concept of multimodality. Handa explains multimodality through a comparison between the two-dimensional flat pages bound in a book and a three-dimensional collection of items in a room (Handa 155). Through the use of multimodality in Web site designs, users are able to access a variety of different media on one page. This concept of multimodality adds to the progression of time and to how users favor new, better, and faster methods over old and outdated ones.

After focusing on the advancement of technology, it becomes clear that time plays a crucial role in the progression of these advancements. While time plays a heavy role in allowing these progressions to occur, it also plays an important role in deciding the value of products. As Killingsworth discusses in his chapter on the appeals of time, consumers want products that are better, newer, faster, and that save time. Viewing time as something of value gives it a sort of concrete structure. This modern concept of time as a substance can be used effectively in the construction of Web pages. By focusing on newer and better concepts of modernity, Web designers are able to reach a larger, more popular audience. Through the use of remediation and multimodality, the content and navigational aspects of Web pages should become more valuable in the eyes of their users. With remediation, older content is able to progress through time and become new and, thus, more valuable to today's Internet users. On the other hand, aspects of multimodality should already feel natural: "Thinking of Web site navigation as moving through space should draw on already existing skills" (Handa 155). Ultimately, in designing Web pages in this day and age, it is crucial to think about the concept of modernity, the progression of time, and the concept of time as an actual substance. We already live in a time of advanced technologies; the time to construct effective Web pages is now!

Wednesday, September 24, 2014

Designing Multi-User Interfaces for Collaboration

Just as Nicola Yuill and Yvonne Rogers explain in their report, "Mechanisms for Collaboration: A Design and Evaluation Framework for Multi-User Interfaces," I agree that multi-user interfaces aren't necessarily 'natural' and that in order to design these interfaces for everyday use there must be "constraints on awareness, control, and availability" (Yuill, Rogers 2). By focusing on constraining these design aspects, it is possible to make these devices more natural and useful for communication and collaboration. To do so, it is important to consider the various social factors that may have an effect on users' abilities, such as disabled people or young children whose minds are still developing. The whole point of Yuill and Rogers' study is to discover how multi-user interfaces can be designed to better assist people in collaborative work spaces, including people with social constraints. "New multi-user interfaces represent a qualitative shift in supporting collaborative group work: the freedom of input enables gesturing, speaking, and touching. These can all be seen, heard, and experienced by others..." (Yuill, Rogers 4). The fact that multi-user interfaces enable multiple users to experience what one person is experiencing, simultaneously, allows them to be a "superior means of collaboration."
In their report, "Designing Electronic Collaborative Learning Environments," Paul Kirschner, Jan-Willem Strijbos, Karel Kreijns, and Peter Jelle Beers, writing in Educational Technology Research and Development, all agree that usability must be a central factor when designing multi-user interfaces for better use in collaborative settings. In their terms, "Usability is concerned with whether a system allows for the accomplishment of a set of tasks in an efficient and effective way that satisfies the user" (Ed. Tech 50). In order to make a multi-user interface a successful collaborative tool, its design must involve what Yuill and Rogers discuss in their report: awareness. While focusing on making the device successful in accomplishing tasks in an efficient manner for collaboration, it is important that all users "have an ongoing awareness of the actions, intentions, emotions and other mental states of other interactants" (Yuill, Rogers 6). In their research, Yuill and Rogers show that users display signs of awareness when using multi-user interfaces. These signs of awareness include making running commentaries on their own actions, anticipating collisions by adjusting their positions, and sometimes elbowing others out of the way. "These implicit mechanisms of awareness play a central role in supporting collaboration with multi-user interfaces" (Yuill, Rogers 7). All of these signs of awareness play into making the design of these devices more natural for use by multiple people. However, in order to make multi-user interfaces more efficient for collaborative work, there must be a constraint on this aspect, since too much awareness can cause issues with use.
This need for constraints is also explained in "Designing Electronic Collaborative Learning Environments." In this report, it is mentioned that "social constraints and conventions... play a role in collaborative environments" (Ed. Tech 53). Both reports agree that when designing multi-user interfaces there must be a control of action in order to prevent things such as individual domination or the "free rider" effect, which describes users in a collaborative setting who invest only minimal effort in group performances (Ed. Tech. 54). In their report, Yuill and Rogers recognize the benefits of devices that precede multi-user interfaces, mentioning that "A mouse can act in some respect like the 'talking stick' that some teachers use as a tangible device to support turn-taking in conversation" (Yuill, Rogers 9). They are essentially suggesting design constraints for multi-user interfaces that control the equal distribution of work among groups and prevent too much dominance within the group, as sketched below.
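
To make this control constraint concrete, the "talking stick" idea can be modeled as a token that only one user holds at a time. The sketch below is purely illustrative; the names and structure are invented for this post rather than taken from either report.

    class TalkingStick:
        """Control constraint: only the current holder may act on the shared surface."""

        def __init__(self):
            self.holder = None

        def request(self, user):
            # Grant control only if nobody currently holds the stick.
            if self.holder is None:
                self.holder = user
                return True
            return False

        def release(self, user):
            if self.holder == user:
                self.holder = None

        def act(self, user, action):
            # Input from anyone but the holder is ignored, preventing one user from dominating.
            if self.holder == user:
                print(f"{user}: {action}")
            else:
                print(f"{user} must wait; {self.holder} holds the stick")

    # Example: two users sharing a tabletop display.
    stick = TalkingStick()
    stick.request("Ana")
    stick.act("Ana", "drags a photo to the center")
    stick.act("Ben", "tries to resize the photo")   # blocked while Ana holds control
    stick.release("Ana")
    stick.request("Ben")
    stick.act("Ben", "resizes the photo")
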
Later in their report, Yuill and Rogers discuss the relevance of the availability of background information within multi-user interfaces. They explain availability as "what information is on hand in the background to influence users' awareness and control... it concerns background information relevant to the task that is accessible for all explicitly over time" (Yuill, Rogers 10). Availability of background information is important in multi-user interfaces, specifically for collaborative projects, because it provides a better understanding for all users accomplishing similar tasks. It also enables what is referred to as the process of negotiation, which "starts when a team member makes as yet an unshared knowledge explicit or tangible to others… After one team member has made contribution, others can try to understand it" (Ed. Tech. 61). This exchange and understanding of information is made possible by the information a multi-user interface makes available. With the availability of background information, users are able to feel more natural using these new technologies.
In a blog post on Wired.com titled "Does Your Tech Make You Feel Superhuman?" Tom Chatfield sees multi-user interfaces as "superhuman" in that they make users feel a sense of power when using them. In his post, he mentions the use of skeuomorphic interaction design, meaning that "elements of design include structures that serve little or no purpose in the artifact fashioned from the new material but was essential to the object made by the original material" (Chatfield). Although skeuomorphs are unnecessary in designing multi-user interfaces for collaboration, they still allow for a more natural feel to these "superhuman" technologies. Including features such as digital pens and erasers (as in Smart Board designs) in multi-user interface designs can be beneficial in creating awareness for collaborative work on these devices. However, as Yuill and Rogers point out, there must be some level of constraint on these mechanisms, and these constraints can arise from various sources, such as physical capabilities or incapabilities.
So, in designing multi-user interfaces for collaborative work, it is crucial to consider the different social factors that may affect users' abilities. Yuill and Rogers believe that in order to successfully design multi-user interfaces it is important to study groups that have difficulty with collaborative tasks, such as young children learning to collaborate. By studying these groups, discoveries about how to make multi-user interfaces more beneficial in group situations become apparent. After studying three different groups, Yuill and Rogers have come up with three behavioral mechanisms that must be considered in order to create a successful, more 'natural' multi-user interface design. These mechanisms include "high awareness of others' actions and intentions, high control over the interface and high availability of background information" (Yuill, Rogers 2). The most crucial aspect of creating successful multi-user interfaces for collaborative work groups is the level of constraint put on these mechanisms. The levels of constraint proposed by Yuill and Rogers can be extremely beneficial to new multi-user interfaces and should be considered by the designers of upcoming devices. However, in addition to constraining these proposed mechanisms to benefit collaborative multi-user interfaces, skeuomorphic interaction design elements should also be included in order to create a more 'natural' device, along with consistent navigational themes that empower its users.

Thursday, September 18, 2014

White Papers: An Accommodated Citizens' Genre


In a July 2009 report by The National Law Center on Homelessness & Poverty and The National Coalition for the Homeless entitled "Homes Not Handcuffs: The Criminalization of Homelessness in U.S. Cities," a lot of statistical information is provided on how the criminalization of homeless people is a detriment to society as a whole. As opposed to helping these people, who are "doing things they need to do to survive," by providing them with shelter, the government is choosing to enforce more laws that "appear to have the purpose of moving homeless people out of sight, or even out of a given city" (9). Since this white paper is providing the public with insight into how the government is choosing to treat the issues of homeless people on the streets, it is in turn defining itself as a citizens' genre while also having explanatory aspects. As the report continues to inform the public about what is going on within their communities, it also provides recommendations on how to potentially resolve the issues presented.

This white paper not only addresses issues of homeless criminalization, it also includes some hefty accommodations for public readership. Sources for this report originate from other complex reports and surveys on the matter. In order for the public to understand this information and why it is important to them, it must be transformed into simpler, more understandable terms. To enable the public's understanding of this information there must be, as Jeanne Fahnestock states, a shift in genre and rhetorical situation. In her article "Accommodating Science: The Rhetorical Life of Scientific Facts," Fahnestock claims that "instead of simply reporting facts for a different audience, scientific accommodations are overwhelmingly epideictic; their main purpose is to celebrate rather than validate… they must be explicit in their claims about the value of the scientific discoveries they pass along" (Fahnestock 279). So, in transforming information for their public audience, The National Law Center on Homelessness & Poverty and The National Coalition for the Homeless had to change their rhetorical situation in order to suit their new audience. They do so not only by simplifying their information, but also by applying the information to a situation that suits the citizen genre.

By clearly stating the issues with the criminalization of homeless people and applying simplified statistics that support their claim, this white paper successfully accommodates its material to suit its public audience. The way the authors intertextually fit their data into the report allows the reader to understand the information provided. By arguing in the stasis of value on behalf of claims in the stasis of cause, this report allows for potential change in the future. With the potential for change that this report offers the public community, it falls squarely under the category of citizens' genre.

Wednesday, September 10, 2014

The Struggles of Popularizing Science

            
In order to produce a popularized article on scientific subjects, the information must undergo a rhetorical alteration. In her article "Accommodating Science: The Rhetorical Life of Scientific Facts," Jeanne Fahnestock includes multiple examples where original scientific reports undergo numerous alterations in order to become more suitable for the eyes of the general public. Among the three interrelated observations Fahnestock makes about her examples, she mentions that there must be a focus on "the genre shift that occurs between the original presentation of a scientist's work and its popularization…" (Fahnestock 277). This observation helps us understand how scientific reports must change in order to suit a generalized audience by looking at the genre shift that occurs in the process. When focusing on the genre shift that occurs between the work of a scientist and its accommodated public version, it is clear that there is a shift from forensic to epideictic delivery. This shift is due to the fact that scientific accommodations serve to celebrate rather than validate.

Clearly, "Scientific papers are largely concerned with establishing the validity of the observations they report…" (Fahnestock 278), since they focus on compiling data for their own discourse community to compare against. However, when accommodating this information for public readership, the data must be certain and the significance of the information must be clear. This is one of the main issues of accommodating science: the information from the original scientific reports becomes glamorized to suit a general audience. In order to keep the general audience pulled in, accommodators must insert uniqueness and rarity into the subjects they are reporting; they search for extremes in order to heighten the significance of the report (Fahnestock 288). However, "striving for drama causes the genre to shade into the field of poetic or mythic utterance" (Killingsworth, Palmer 135). So, by glamorizing the information it becomes, in a way, falsified; although the information is there, assumptions are made and conclusions are inferred.

This glamorization correlates with what M. Jimmie Killingsworth and Jacqueline S. Palmer discuss in their book Ecospeak, specifically in a chapter entitled "Transformations of Scientific Discourse in the News Media." Killingsworth and Palmer state that "…science must solve human problems and thus transcend its own version of objectivism, its own self-definition, must become engineering if it is worthy of being reported in the press" (Killingsworth, Palmer 135). So, for a scientific article to even become newsworthy, it would have to solve some sort of issue or spark some sort of interest within the general population. An issue must be resolved within the article since "…the public as readers would move the information themselves into the higher stases and ask, 'Why is this happening? Is it good or bad? What should we do about it?'" (Fahnestock 292). The general public wants to know what the exact outcome of the situation is going to be, or else readership is lost. Aside from dramatizing information, accommodators also jump to conclusions in order to fulfill the general audience's need to know the outcome of a situation.

In order to accommodate scientific knowledge for the general public, not only does the scientific jargon in the original reports need to be simplified, but there also needs to be a shift in genre. After shifting from forensic to epideictic delivery, the accommodator needs to find a way to make the situation unique: a way to glamorize the information so that it can reach a widespread audience. However, there also needs to be a firm conclusion for the general public to acknowledge; otherwise, interest and significance are lost. Why does accommodating science for the general public have to be such a long process? Why can't accommodators just translate scientific jargon into simple language for general readers? Why does the accommodator have to include "mythic utterance" as opposed to valid facts in their popularizations?

Tuesday, September 9, 2014

The Rhetorical Situation of The Future of Reading



In Jonah Lehrer's article "The Future of Reading," there seems to be a large concern with the technological advancement of e-readers. In analyzing the rhetorical situation of this article, it is clear that Lehrer perceives a flaw in how easy it is to read text that is perfectly printed on a screen, causing reading to become an unconscious, effortless act. "Meanwhile, unusual sentences with complex clauses and smudged ink tend to require more conscious effort, which leads to more activation in the dorsal pathway. All the extra work – the slight cognitive frisson of having to decipher the words – wakes us up" (Lehrer). The fact that e-readers allow reading to become an unconscious activity is precisely Lehrer's exigence for writing this article. As the rhetor, however, Lehrer is not the one who originated this discourse. As Keith Grant-Davie explains in his article "Rhetorical Situations and Their Constituents," "We can distinguish those who originated the discourse… from those who are hired to shape and deliver the message…" (Grant-Davie 269). This topic has been largely discussed in the writing community ever since the first Kindle was released; Lehrer is clearly responding to a topic that is already present in his discourse community.


That being said, Lehrer is not a "creative genius"; he is merely stating his opinion on an already present subject and offering a potential solution to the problem. The fact that Lehrer pieces together fragments of pre-existing texts to build his own discourse is what makes this article intertextual. As James E. Porter puts it in his article "Intertextuality and the Discourse Community," "Examining texts 'intertextually' means looking for 'traces,' the bits and pieces of Text which writers or speakers borrow and sew together to create new discourse" (Porter 34). Although Lehrer's article is intertextual in Porter's terms, he still fails to maintain a consistently strong ethos within his discourse community. The fact that Lehrer has failed to successfully make use of his sources has caused his credibility to drop significantly. This decrease in credibility can be seen as a constraint on his writing, since "what we have already written must constrain what we write next" (Grant-Davie 273). However, within the bounds of his article, e-readers may be seen as the constraint on his exigence. Lehrer does recognize "the astonishing potential of digital texts and e-readers. For [him], the most salient fact is this: It's never been easier to buy books, read books, or read about books you might want to buy" (Lehrer). On the other hand, it is the e-reader that is making the content we read easy to perceive; Lehrer worries that "before long, we'll become so used to the mindless clarity of e-ink – to these screens that keep on getting better – that the technology will feedback onto the content, making us less willing to endure harder texts. We'll forget what it's like to flex those dorsal muscles, to consciously decipher a literate clause" (Lehrer). So, since his main concern is losing the ability to read consciously rather than just skimming screens, Lehrer's constraint thus becomes the e-reader itself. It is the e-reader that is enabling people to read quickly and easily, and therefore unconsciously.


Although Lehrer uses a small number of sources to back up his main point, his article seems to be mostly opinion-based. By doing so, Lehrer seems to be reaching out to an audience that isn't necessarily a part of the sci-tech community. He goes off on tangents and inserts little personal tidbits; for example, when discussing the technological advancement of the clarity of screens he says, "(I still can't believe that people watched golf before there were HD screens. Was the ball even visible? For me, the pleasure of televised golf is all about the lush clarity of grass.)" (Lehrer). This sentence is irrelevant to his article, except that it allows him to connect to his audience on a personal level by reflecting on his personal views of the advancement of screen clarity.


Even though Lehrer does not fully resolve his exigence, he does state his main concerns with the e-reader and uses other sources to try to communicate those concerns to his discourse community. This article allows the audience to weigh the pros and cons of the e-reader as Lehrer discusses the positives and negatives of his situation. Grant-Davie states that "Rhetors may invite audiences to accept new identities for themselves, offering readers a vision not of who they are but of who they could be" (Grant-Davie 271). In this case, Lehrer invites his audience either to share his views or to form opinions against them. Perhaps, when beginning the article, the reader might have been pro e-reader; however, after reading Lehrer's views and the source from a neuroscientist that he provides, the reader might second-guess their stance on e-readers. This is the most important part of a rhetorical situation: the outcome of how the audience perceives it, specifically after being given the information needed to make a decision.