November 29 - December 5
Topic 21: The Uneasy Alliance: Free Software vs Open Source
Analyse both the free software and the open source approach in your blog. If you prefer one, provide your arguments.
Before analyzing the difference between free software and open source software, it is important to note that free software is not the same as freeware, i.e. software available at zero price. Although the definition of freeware covers both proprietary closed source software available for use at no cost and free and open source software, in common usage it more often refers to the former. Closed source software is software distributed without its source code.
The rest is a bit more complicated. There are two movements, the free software movement (Richard Stallman) and the open source movement (Bruce Perens), that can be viewed as two political camps within the same free software community. There are very few cases of software that is free software but not open source software, or vice versa. The difference between the terms lies in where they place the emphasis. Free software is defined in terms of giving the user freedom, which reflects the goal of the free software movement. According to Richard Stallman, "When we call software "free", we mean that it respects the users' essential freedoms: the freedom to run it, to study and change it, and to redistribute copies with or without changes. This is a matter of freedom, not price, so think of "free speech", not "free beer"." Open source highlights that the source code is viewable to all. Proponents of the term usually emphasize the quality of the software and how it results from the development models that are possible and popular among free and open source software projects. It focuses on technology rather than ethics. As Richard Stallman puts it, "Open source is a development methodology; free software is a social movement."
Some free software advocates use the term Free and Open Source Software (FOSS) as an inclusive compromise, drawing on both philosophies to bring free software advocates and open source advocates together to work on projects with more cohesion. Some users believe a compromise term encompassing both aspects is ideal: it promotes both the user's freedom with the software and the perceived superiority of an open source development model. Indeed, the distinction seems of little importance to anyone apart from the proponents of the two sides. What we should really consider important is the availability of source code, as explained by Chris Pirillo in the video I embedded in my previous post. There is a huge reason why open is better than just free, though in his case he was referring to freeware (closed software available at no cost) rather than free software.
The four essential freedoms that define free software, specifically freedoms 1 and 3, require source code to be available, because studying and modifying software without its source code is highly impractical. Studying and modifying software in order to improve it through collaborative development is what both movements and their proponents should strive for, rather than arguing over whether the ethical or the technological approach is more appropriate. The freedom to improve the program and release your improvements (and modified versions in general) to the public, so that the whole community benefits, is a wholehearted goal of the free software movement. The open source philosophy, on the other hand, is perhaps a bit more constructive, as it focuses on the strengths of peer-to-peer development while also spreading the freedom to use, study, change, and improve software through the availability of its source code. However, while open source has a lot going for it, it does not protect the fundamental freedoms the way free software advocates do. I think it is fundamentally important to promote a better understanding of both movements and their benefits, without waging a war of one against the other.
One thing that I find quite contradictory is that free software licences actually place a restriction that many open source licenses do not. Permissive open source licenses, such as the BSD licenses, do not restrict redistribution of identical or modified copies. Copyleft free software licenses require that redistribution remain under a free software license. This leads to an asymmetric incompatibility between free software and open source: while it is possible to use permissively licensed open source code in free software projects (for example, the Linux operating system copying drivers from FreeBSD), the inverse is not allowed. Isn't this a violation of freedom?
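To make the asymmetry concrete, here is a toy sketch in Python (my own illustration, not legal advice; the table encodes only the direction of compatibility discussed above):

# Permissive code may flow into copyleft projects, but not the reverse.
COMPATIBLE = {
    ("BSD", "GPL"): True,   # e.g. Linux incorporating FreeBSD driver code
    ("GPL", "BSD"): False,  # GPL code cannot be relicensed BSD-only
    ("BSD", "BSD"): True,
    ("GPL", "GPL"): True,
}

def can_incorporate(source_license, project_license):
    """Can code under source_license be used in a project under project_license?"""
    return COMPATIBLE[(source_license, project_license)]

print(can_incorporate("BSD", "GPL"))  # True: the combined work becomes GPL
print(can_incorporate("GPL", "BSD"))  # False: copyleft must be preserved

The table is one-directional by design: that one-way valve is precisely the "restriction" that free software licenses add.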
Another question is whether programmers deserve, or should ask for, rewards for their creativity. Stallman says that this is where people get the misconception of "free": there is nothing wrong with requesting rewards for one's work. Restricting and controlling the user's decisions on use is the actual violation of freedom. Stallman also argues that in some cases monetary incentive is not necessary for motivation, since the pleasure of expressing creativity is a reward in itself.
Free vs open: what's the difference?
Paul Hudson explores the gap between the philosophies on TechRadar.
Topic 22: Creative Commons and Free Content Models
Monday, November 29, 2010
Friday, November 26, 2010
Task 10: Applying activity theory into practice
PLENK 2010 – massive open online course (MOOC)
Our course New Interactive Environments (NIE)
Here are the step-by-step instructions:
- Have a look at the PLENK2010 course and try to understand its setup, interactions, subjects, rules, instruments, etc. (the components of an activity system). If you want to, you can also register for the course. Describe the course according to activity theory and produce its activity system.
- Think about the course New Interactive Environments, describe it according to activity theory, and produce its activity system.
- Compare the two described courses. First, think of the drawbacks of activity theory when analysing digitally mediated courses, and the challenges and limitations you faced while putting it into practice. Secondly, look at the two activity systems and bring forth the differences and similarities of these courses (if any).
Thursday, November 25, 2010
Week 10: Ethics and Law in New Media
November 22-28
Topic 19: One Microsoft Way: the World of Proprietary Software
What could the software licensing landscape look like in 2015? Write a short predictive analysis.
Economic downturns tend to accelerate change in the IT world. People with budgetary authority take a fresh look at what they are spending money on (including IT investments) and what to do differently going forward. Given the situation of the past couple of years, we are most likely to see continued growth of open source software in the years to come, and certainly over the next five.
The open source versus proprietary software battle will continue. There is nothing to stop organizations from moving en masse to open source (for example, Linux and OpenOffice versus Microsoft Windows and Office), especially in academia, government departments and non-profit organizations, particularly in nations that do not have deep pockets.
Smaller companies have already found open source software beneficial, and more and more larger companies are showing the same trend. Using open source software, and participating in the open source communities that build it, helps to spread the cost and risk among partners in those communities. "The currently underreported and future trend is the shift of the development of non-business-differentiating software within companies to open source," predicts Bruce Perens, creator of the Open Source Definition and co-founder of the Open Source Initiative, on InfoWorld. We are past the days when people asked if Linux or Apache was safe to depend on in business.
However, the issue is not so much whether open source will win, but rather how long it will take and which niches and markets will remain proprietary. Those niches and markets will always be there, and always be significant – for commercial, security, or market-size reasons. But they will no longer be the mainstay of the software market. In the "open source era", software revenue will come from services and support, not proprietary packages.
One market niche where proprietary software will probably continue to be popular is creative software (web design and design in general, photography, digital art), with Adobe products leading the way. Because of the iPad's great success, Apple's iOS, currently at version 4.2.1, is another example of closed software that is very likely to stay around. However, the closed and proprietary nature of iOS has garnered criticism, particularly from digital rights advocates such as the Electronic Frontier Foundation, computer engineer and activist Brewster Kahle, Internet-law specialist Jonathan Zittrain, and the Free Software Foundation.
As for free software licensing, I believe the GNU GPL, analyzed in the previous task, will remain the most widely used license, ensuring the freedoms of copyleft even when the work is changed or added to. The permissive free software licenses, such as the BSD licenses, are of course an alternative, putting works licensed under them relatively closer to the public domain.
What most swayed me in favor of open source was pointed out by Chris Pirillo, founder and maintainer of Lockergnome, in one of his YouTube videos, The Future of Software is Open Source. He argues that there is a huge reason why open is better than just free.
To retell it, this is basically what he says: if I create a proprietary (closed) piece of software and refuse to share the code with others, it goes away when I die. When you share code as an open collaboration, there is always room for someone to step in and take over. If one developer knows a way to make a piece of the software work better, they can add to it when it is open source. An open-source program can be enhanced until the end of time, basically. So your value in this world will be seen long after you are gone.
"You are an absolute fool to believe that the future of software is anything but open. Are we there now? Obviously not... but in good time, proprietary software will become a thing of the past." – Chris Pirillo
Topic 20: The Digital Enforcement
Write a short analysis about the applicability of copying restrictions – whether you consider them useful, in which cases exceptions should be made, etc.
Nothing can ever be 100% open and free. The world needs some copy protection and copying restrictions – to a certain and reasonable degree.
What makes open source good is the passionate open source community – one with significant influence on technology directions and options, working together to solve problems and share the fruits of their labors with others. But people write for many reasons: some for pleasure, others for money. An author wishing to profit from their work must find some way to limit access to that work to customers willing to pay for the privilege.
As I wrote in one of my previous posts on intellectual property, copyright is meant to protect creators. New innovations are often both creative and expensive endeavors. Copyright laws protect innovators who invest huge amounts of time and money into a project from having the result stolen. Creative Commons, a non-profit organization devoted to expanding the range of creative works available for others to legally build upon and share, also states that it is not anti-copyright per se, but argues for copyright to be managed in a more flexible and open way.
Take music or movies, for example. We are all willing to compensate the artists whose work we find entertaining – at least those of us with the right values are. Once we have done that, copying restrictions shouldn't be overly constrictive, as they once were with music files bought from iTunes. For that very reason, in January 2009 Apple removed anticopying restrictions from all of the songs in its popular iTunes Store and allowed record companies to set a range of prices for them. With the copying restrictions removed, people can shift the songs they buy on iTunes among computers, phones and other digital devices. Industry pundits had long pointed to DRM as one culprit for the music companies' woes, saying it alienated some customers while doing little to slow piracy on file-sharing networks.
Yet the case of the science business shows us where exceptions should (and could) be made, perhaps not by removing copying restrictions entirely but by making information more openly accessible.
Tuesday, November 23, 2010
Flowtown: Infographics
I recently discovered and absolutely love the different infographics by Flowtown.
And they provide code to embed these graphics on your site (share if you want to).
Here is one: Who are the millennials?
There's been a lot of talk about who millennials are and how different they are from 'Gen-Xers' and the 'Baby Boomers', but a lot of this commentary has to do with attitudes and priorities. Pew recently did a study on millennials and how they access media and technology, which provided the basis for the graphic below. It is primarily concerned with how media and technology play a major role in shaping who millennials are and how they interact with one another. It also addresses what's important to them, how they value marriage and education, and other interesting facts and figures.
Flowtown - Social Media Marketing Application
And one more: The subjective life of a computer user
From Commodore to iPad, the members of Generation X saw some incredible advancements in technology and computing. This techno-lution shaped them into the mass-consuming technophiles we know them as today. Let's take a look at the journey of this plugged-in generation.
Flowtown - Social Media Marketing Application
Socialnomics: Social Media Revolution
Is social media a fad?
Monday, November 22, 2010
Is iPad-Only Newspaper the Future of Journalism?
Apple and News Corp are reportedly set to launch The Daily, the first iPad-only news publication, Mashable reports.
The Daily will have no website and no print edition; the only way to get it will be to download it via an iPad application, at $0.99 each. The publication will not just be a newspaper formatted for the tablet, though; it will incorporate a great deal of video content and utilize the iPad's technology in ways that no newspaper or website currently does.
Rupert Murdoch, head of the media giant News Corp and a visionary kingpin of news, seems to know what he is getting into. Here's a simple calculation: if he can capture just a fraction of the eventual iPad market (5% of 40 million iPad owners by the end of 2011), his digital publication will succeed.
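Spelling that calculation out (a rough sketch; the weekly purchase cadence is my assumption, since the post only mentions a $0.99 price per copy):

# Back-of-the-envelope version of Murdoch's numbers.
ipad_owners = 40_000_000   # projected iPad owners by the end of 2011
conversion = 0.05          # the 5% fraction Murdoch hopes to capture
price = 0.99               # dollars per copy, as reported
copies_per_year = 52       # assumption: one paid issue per week

readers = ipad_owners * conversion         # 2,000,000 readers
gross = readers * price * copies_per_year  # roughly $103 million per year
print(f"{readers:,.0f} readers, about ${gross:,.0f} per year gross")

Even before Apple's cut of App Store sales, that would be a substantial revenue stream for a single digital publication.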
But will people stop reading newspaper websites in favor of the iPad? And how rapidly will their paper editions decline? It seems that 2011 is going to be another interesting year for the world of journalism. And it sure seems that the future of news isn't in propping up print publications, but in creating truly immersive digital experiences.
Evidently, the flagship of Estonian journalism is aware of it, too.
Eesti Ekspress recently introduced its iPad version, now available in the App Store. The birth of the weekly at the end of the 1980s and the transition of the whole edition to recycled paper were both revolutionary in their day. Today, Ekspress is the first Estonian newspaper carrying out the iPad revolution.
Rumor Has It: iPad 2 Is On Its Way
Rumor has it that suppliers, model specifics, and new materials have already been chosen for the iPad 2. A new patent has also come to light that introduces some interesting possibilities about what materials might be used in future iPad casing designs, reports gigaom.com.
Other iPad rumors from around the tech world include the addition of front- and back-facing cameras for use with FaceTime, a three-axis gyroscope for games, a Cortex-A9 CPU, a mini-USB port, and more memory. Some sites are also predicting a 7-inch model to bridge the gap between the iPhone/iPod touch and the current large iPad screen, though Steve Jobs recently went on the record against that idea.
The speculated release time for iPad 2 is April 2011, a year after the first iPad was introduced.
Thursday, November 18, 2010
Week 9: Ethics and Law in New Media
November 15-21
(ↄ) All rights reversed, all wrongs reserved
Topic 17: The Hacker Approach: Development of Free Licenses
Study the GNU GPL and write a short blog essay about it. You may use the SWOT analysis model (strengths, weaknesses, opportunities, threats).
The GNU General Public License (GNU GPL or simply GPL) is the most widely used free software license, originally written by Richard Stallman for the GNU Project. The GNU website states: "Free software is a matter of liberty, not price. To understand the concept, you should think of "free" as in "free speech", not as in "free beer"."
The GPL is the first copyleft license for general use, which means that derived works can only be distributed under the same license terms. While copyright law gives software authors control over copying, distribution and modification of their works, the goal of copyleft is to give all users of the software the freedom to carry out these activities. Under this philosophy, the GPL grants the recipients of a computer program the rights of the free software definition and uses copyleft to ensure the freedoms are preserved, even when the work is changed or added to.
In this way, copyleft licenses are distinct from other types of free software licenses, which do not guarantee that all "downstream" recipients of the program receive these rights, or the source code needed to make them effective. In particular, permissive free software licenses such as the BSD licenses allow re-distributors to remove some or all of these rights, and do not require the distribution of source code. While BSD advocates find copyleft restrictive (with regard to the GPL's tendency to absorb BSD-licensed code without allowing the original BSD work to benefit from it), some observers believe that the strong copyleft provided by the GPL was crucial to the success of GNU/Linux, giving the programmers who contributed to it the confidence that their work would benefit the whole world and remain free, rather than being exploited by software companies that would not have to give anything back to the community.
Where there is success, there is often criticism. Because any works derived from a copyleft work must themselves be copyleft when distributed, they are said to exhibit a viral phenomenon. Microsoft vice-president Craig Mundie remarked, "This viral aspect of the GPL poses a threat to the intellectual property of any organization making use of it." In another context, Steve Ballmer declared that code released under the GPL is useless to the commercial sector (since it can only be used if the resulting surrounding code becomes GPL), describing it as "a cancer that attaches itself in an intellectual property sense to everything it touches". On the other side, as K. Lotan observes in Efficiency Analysis of the Law and Order of the GPL, many corporations have discovered that they can utilize the open source model for their commercial needs, inter alia as a marketing and business strategy where the profit flows from ancillary products and services. Consequently, what began as an ideologically motivated approach has broadened into a widespread opportunity for commercial firms to embrace a viable alternative to proprietary software.
Lotan points out the advantages. He writes, "Whether created by a commercial firm, the community, or both, the open source paradigm has numerous advantages. First, it increases the dissemination of information by offering a software product which costs nothing. It avoids the user "lock-in" problem, a dilemma often associated with proprietary products that, due to the high costs of switching systems, can potentially lock users into staying with one company's system while being charged monopoly prices and subjected to monopoly licensing terms. In this way the open source system creates a threat that nudges proprietary systems into the zone of competition. In addition, the communal method of creation possesses specific advantages that closed systems lack, including more dynamism and a better use of employee skills, along with enhanced quality control." As Eric Raymond (a computer programmer, author of The Cathedral and the Bazaar, and open source software advocate) puts it, "Given enough eyeballs, all bugs are shallow."
Challenges in the future
Stallman admits in his essay about the GNU Project that several challenges make the future of free software uncertain; meeting them will require steadfast effort and endurance.
- Hardware manufacturers increasingly tend to keep hardware specifications secret, which makes it difficult to write free drivers so that Linux and XFree86 can support new hardware. "We have complete free systems today, but we will not have them tomorrow if we cannot support tomorrow's computers," Stallman writes.
- A nonfree library that runs on free operating systems acts as a trap for free software developers (for example, the nonfree GUI toolkit library Qt, used in a substantial collection of free software, the KDE desktop). The library's attractive features are the bait; if you use the library, you fall into the trap, because your program cannot usefully be part of a free operating system. If a program that uses the proprietary library becomes popular, it can lure other unsuspecting programmers into the trap.
- The worst threat comes from software patents, which can put algorithms and features off limits to free software for up to twenty years.
- The biggest deficiency in free operating systems is not in the software – it is the lack of good free manuals to include in them. Documentation is an essential part of any software package; when an important free software package does not come with a good free manual, that is a major gap. We have many such gaps today.
Topic 18: The Millennium Bug in the WIPO Model
Find a good example of the "science business" described above and analyse it as a potential factor in the Digital Divide discussed earlier. Is the proposed connection likely or not? Blog your opinion.
An empirical study published in 2010 showed that roughly 20% of the total output of peer-reviewed articles could be found openly accessible. Chemistry (13%) had the lowest overall share of OA* of all scientific fields, Earth Sciences (33%) the highest. In medicine, biochemistry and chemistry, gold publishing in OA journals was more common than author posting of manuscripts in repositories. In all other fields, author-posted green copies dominated the picture.
Let's take a look at the field of biotech. Thirty years ago it appeared as if biotech would not only revolutionize healthcare, but also radically improve the very process of R&D itself. This hasn't happened. Though some firms such as Amgen have created dramatic breakthroughs, the overall industry track record is poor – in aggregate, the sector has lost money during this period.
Sean Silverthorne asked in an interview with Prof Gary P. Pisano, author of the book Science Business: The Promise, the Reality, and the Future of Biotech, what went wrong. Providing answers, Pisano points to systemic flaws as well as unhealthy tensions between science and business.
- The biotech industry has underperformed expectations, caught in the conflicting objectives and requirements between science and business.
- The industry needs to realign business models, organizational structures, and financing arrangements so they will place greater emphasis on long-term learning over short-term monetization of intellectual property.
- A lesson to managers: Break away from a strategy of doing many narrow deals and focus on fewer but deeper relationships.
Underscoring the tensions between science and business inherent in a science-based business, Pisano explains: "Science and business work differently. They have different cultures, values, and norms. For instance, science holds methods sacred; business cherishes results. Science should be about openness; business is about secrecy. Science demands validity; business requires utility. So, the tensions are deep. What has happened is that we have tried to mash these two worlds together in biotech and may not be doing either very well. Science could be suffering and business certainly is suffering. If you try to take something that is science, and then jam it into normal business institutions, it just doesn't work that well for either science or business."
Therefore, the science business (or the business of science) not only potentially widens the global digital divide, but also sets major limits on R&D in several important industries. To improve the relationship between science and business, Pisano suggests finding partners that truly believe in long-term, committed relationships, not those looking to diversify their risks. He also argues that integration matters a lot, which means you have to organize your R&D in a truly integrated fashion.
In the EU as well, experts are calling for a new approach to research and innovation in Europe:
- Research and innovation policy should focus on our greatest societal challenges.
- New networks, institutions and policies for open innovation should be encouraged.
- Spending on research, education and innovation should be increased, in part through bolder co-investment schemes.
- R&D and innovation programmes should be better coordinated and planned, both at EU level and among the Member States.
- Open competition should be standard in EU programmes.
-----------------------------------------------------------------------
* Open Access (OA) can be provided in two ways:
- "Green OA" is provided by authors publishing in any journal and then self-archiving their postprints in their institutional repository or on some other OA website. Green OA journal publishers endorse immediate OA self-archiving by their authors.
- "Gold OA" is provided by authors publishing in an open access journal that provides immediate OA to all of its articles on the publisher's website.
For the most part, the direct users of research articles are other researchers. Open access helps researchers as readers by opening up access to articles that their libraries do not subscribe to. One of the great beneficiaries of open access may be users in developing countries, where currently some universities find it difficult to pay for subscriptions required to access the most recent journals. All researchers benefit from OA as no library can afford to subscribe to every scientific journal and most can only afford a small fraction of them. Open access extends the reach of research beyond its immediate academic circle. An OA article can be read by anyone – a professional in the field, a researcher in another field, a journalist, a politician or civil servant, or an interested hobbyist.
Wednesday, November 17, 2010
Estonia Today: Communications Disrupted by Internet Crash
Internet connections, cell phones, land lines, cable television and ATMs are down in many places throughout Estonia due to a system disruption at Elion, one of the country's main service providers. There is no Internet to find out why there is no Internet!
Tuesday, November 16, 2010
Breaking Facebook News: The Modern Messaging System
Long in need of an upgrade, Facebook's messaging has finally gotten a facelift and a ton of new features.
Zuckerberg & Co. has launched what it calls the "Modern Messaging System", a product that integrates IM, chat, SMS and e-mail into one inbox. Its central idea is that messaging should be simple and unified. To peek in, see Mashable's screenshot walkthrough.
Facebook: "See the messages that matter!"
Get Facebook messages, chats and texts all in the same place. Include e-mail by activating your optional @facebook.com e-mail address. See everything you've ever discussed with each friend as a single conversation. Focus on messages from your friends, while messages from unknown senders and bulk e-mail go into the Other folder.
According to Zuckerberg, modern messaging is seamless, informal, immediate, personal, simple and minimal. The revamped Facebook Messages will be rolled out to the social network's 500+ million users in the next few months in an invite-only process. To request an invitation, go here.
"Where's my box of letters? It's locked up in a phone, it's locked up in email. It's not in one place. Until now."
You can read more on The Facebook Blog: See the messages that matter. And one quite opposite opinion: Facebook gets email: maybe not such a good idea. Some security issues are discussed here: Examining the security implications of Facebook Messages.
So how do you feel about the new social inbox?
Monday, November 15, 2010
Task 9: Exploring activity theory as a framework for describing activity systems
New Interactive Environments
Basic structure of an activity
As explicated by Kuutti (1995), this systemic model, based on the conceptualization by Engeström (1987), contains three mutual relationships between subject, object and community. The relationship between subject and object is mediated by "tools", the relationship between subject and community is mediated by "rules" and the relationship between object and community is mediated by the "division of labour". A "tool" can be anything which is used in the transformation process, including both material tools and tools for thinking; "rules" cover both explicit and implicit norms, conventions and social relations within a community; "division of labour" refers to the explicit and implicit organization of a community as related to the transformation process of the object into the outcome.
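To make the triangle concrete, here is a minimal sketch of an activity system as a small data structure (my own illustration, not from Kuutti or Engeström; the course values below are hypothetical examples):

from dataclasses import dataclass, field

@dataclass
class ActivitySystem:
    subject: str                 # who carries out the activity
    object: str                  # what the activity is directed at
    community: list              # others sharing the same object
    tools: list = field(default_factory=list)    # mediate subject-object
    rules: list = field(default_factory=list)    # mediate subject-community
    division_of_labour: list = field(default_factory=list)  # mediate community-object
    outcome: str = ""            # what the object is transformed into

# Hypothetical sketch of the NIE course as an activity system:
nie = ActivitySystem(
    subject="a student",
    object="weekly course tasks and readings",
    community=["fellow students", "facilitators"],
    tools=["personal blogs", "course wiki", "discussion forum"],
    rules=["weekly deadlines", "blogging etiquette", "grading criteria"],
    division_of_labour=["facilitators set and review tasks",
                        "students blog, comment and present"],
    outcome="completed course and new knowledge",
)
print(f"{nie.subject} -> {nie.object} -> {nie.outcome}")

Filling in these slots for a given course is essentially what it means to "produce its activity system".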
Summarising activity theory and its potential for describing activity systems
Activity theory as a psychological meta-theory, paradigm, or framework is dynamic. It can be used by a variety of disciplines to understand the way people act. Founded by Leont'ev and Rubinshtein in the former USSR, activity theory became widely used in both theoretical and applied psychology, in areas such as education, training, ergonomics, and work psychology.
In the study of human-computer interaction and cognitive science, activity theory can be used to provide a framework for informing and evaluating design. In a framework derived from activity theory, any task, or activity, can be broken down into actions, which are further subdivided into operations. In a design context, using these categories can provide the designer with an understanding of the steps necessary for a user to carry out a task. One of the most frequently quoted books on the application of activity theory in human-computer interaction is written by Bonnie N. Nardi (1996).
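Continuing the sketch above, the activity/action/operation hierarchy can be written down the same way (again my own hypothetical example, not one taken from Nardi's book):

# Hierarchy: activity (driven by a motive) -> actions (directed at goals)
# -> operations (routine steps adjusted to conditions).
activity = {
    "motive": "complete the weekly course task",
    "actions": [
        {"goal": "write a blog post",
         "operations": ["open the editor", "draft the text", "add links", "publish"]},
        {"goal": "comment on a peer's post",
         "operations": ["open the peer's blog", "read the post", "write a comment"]},
    ],
}

for action in activity["actions"]:
    print(action["goal"], "->", ", ".join(action["operations"]))

For a designer, walking down this tree is exactly the decomposition described above: each action reveals the steps a user must carry out.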
Examples from computer and information studies are ample. A re-examination of information seeking behaviour in the context of activity theory applies the key elements of activity theory to the conduct of information behaviour research, where the activity-theoretical approach provides a sound basis for elaborating contextual issues and for discovering organizational and other contradictions that affect information behaviour. Everyday inclusive Web design: an activity perspective uses a method where the design activities of end-users and system designers are modeled using activity theory; activity models and scenarios are used to describe and analyze design activity. From activity to learning: using cultural-historical activity theory to model school library programmes and practices shows where activity theory, as a model that takes a developmental view of minds in context, is particularly well suited; the paper focuses on the activity-theoretic concepts of contradictions and expansive learning as they relate to the development of best practices, and library activity is illustrated from multiple perspectives using a triangulated, qualitative approach.
Even though it might seem like just another figure explaining a theoretical approach, the model becomes more practical when actually applied. Real-life situations always involve an intertwined and connected web of activities which can be distinguished according to their objects. Participation in connected activities having very different objects can cause tensions and distortions. The whole picture starts to make sense when filled with relevant details, such as the right subject, object and community with the mediating tools, rules and division of labour. However, Kuutti suggests that the model should be understood rather broadly. Each of the mediating terms is historically formed and open to further development. Let us take a closer look at one of the studies mentioned above.
A sample case: everyday inclusive Web design
Everyday inclusive Web design is a design perspective that promotes the creation of accessible content by everyday end-users, as website accessibility (especially with more and more content being created by non-professionals) is a problem that affects millions of people with disabilities.
As we know from personal experience of being surrounded by an ever-growing number of social applications (such as YouTube and Flickr) and networks (for example, MySpace and Facebook), recent developments in Web technology have provided new opportunities for end-users to participate as designers. The study looks at how professional system designers and end-users each engage in design activities within social software systems. As these activities are not independent, but rather interact with one another to produce the final site content, activity models and scenarios are used to describe and analyze the process and its possible contradictions.
I quite enjoyed how Kuutti points out that in activity theory contradictions are seen as sources of development; real activities are practically always in the process of working through some such contradictions. It is through these contradictions (problems, ruptures, breakdowns, clashes, etc.) that we improve ourselves or the systems and technologies we work with. In the case of everyday inclusive Web design, contradictions found between personal expression and publishing objectives in end-user design activity, as well as contradictions between the perceived and actual number of system users with disabilities, led to inaccessible design. With the help of the activity model, these issues were clearly visualized in order to provide suggestions and solutions for improvement.
In short, the study was able to conclude that the accessibility of social software systems depends on the cooperative work of system designers and end-users. The underlying structure of social software systems may be altered to increase users' awareness of accessibility issues and to encourage accessible design practices. Both system designers and end-users must act with accessibility in mind in order for the end result to be accessible. End-users require access to design tools that support accessible design practices, while system designers must instruct end-users about available accessibility features.
An activity is the minimal meaningful context for understanding individual actions. An activity system is a logical collection of activities designed to fulfill some purpose.
As explicated by Kuutti (1995), this systemic model, based on the conceptualization by Engeström (1987), contains three mutual relationships between subject, object and community. The relationship between subject and object is mediated by "tools", the relationship between subject and community is mediated by "rules" and the relationship between object and community is mediated by the "division of labour". A "tool" can be anything which is used in the transformation process, including both material tools and tools for thinking; "rules" cover both explicit and implicit norms, conventions and social relations within a community; "division of labour" refers to the explicit and implicit organization of a community as related to the transformation process of the object into the outcome.
Summarising activity theory and its potential for describing activity systems
Activity theory as a psychological meta-theory, paradigm, or framework is dynamic. It can be used by a variety of disciplines to understand the way people act. Founded by Leont'ev and Rubinshtein in the former USSR, activity theory became widely used in both theoretical and applied psychology, in areas such as education, training, ergonomics, and work psychology.
In the study of human-computer interaction and cognitive science, activity theory can be used to provide a framework for informing and evaluating design. In a framework derived from activity theory, any task, or activity, can be broken down into actions, which are further subdivided into operations. In a design context, using these categories can provide the designer with an understanding of the steps necessary for a user to carry out a task. One of the most frequently quoted books on the application of activity theory in human-computer interaction is written by Bonnie N. Nardi (1996).
Examples from computer and information studies are ample. A re-examination of information seeking behaviour in the context of activity theory, applies the key elements of activity theory to the conduct of information behaviour research, where the activity-theoretical approach provides a sound basis for the elaboration of contextual issues, for the discovering of organizational and other contradictions that affect information behaviour. Everyday inclusive Web design: an activity perspective, uses a method where the design activities of end-users and system designers are modeled using activity theory. Activity models and scenarios are used to describe and analyze design activity. From activity to learning: using cultural-historical activity theory to model school library programmes and practices, where activity theory, as a model that takes a developmental view of minds in context, is particularly well suited. The paper focuses on the activity theoretic concepts of contradictions and expansive learning as they relate to the development of best practices. Library activity is illustrated from multiple perspectives using a triangulated, qualitative approach.
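Since the model is essentially a small structured vocabulary, it can help to see it written down as data. Below is a minimal illustrative sketch in TypeScript; the type and field names (ActivitySystem, Action, and the example values) are my own invention for this post, not part of any activity theory toolkit, and the example is loosely based on the Web design study discussed below.

// Illustrative only: one way to write down Engeström's activity system,
// as summarised by Kuutti, plus the task -> actions -> operations
// hierarchy used in HCI analysis. All names here are my own.

interface ActivitySystem {
  subject: string;             // the actor carrying out the activity
  object: string;              // what the activity is directed at
  community: string;           // others who share the same object
  tools: string[];             // mediate subject <-> object (material or conceptual)
  rules: string[];             // mediate subject <-> community (explicit/implicit norms)
  divisionOfLabour: string[];  // mediates object <-> community
  outcome: string;             // what the object is transformed into
}

interface Action {
  goal: string;                // the conscious goal of the action
  operations: string[];        // routinised steps performed under given conditions
}

// Example: an end-user designing a page within a social software system.
const endUserDesign: ActivitySystem = {
  subject: "end-user designer",
  object: "a personal page with photos and text",
  community: "all site members, including users with disabilities",
  tools: ["page editor", "templates", "accessibility checker"],
  rules: ["terms of service", "community conventions"],
  divisionOfLabour: ["system designers build the tools", "end-users create the content"],
  outcome: "a published page that is accessible (or not)",
};

const addPhoto: Action = {
  goal: "add a photo to the page",
  operations: ["upload the file", "write alt text", "pick a layout", "publish"],
};

console.log(endUserDesign.subject, "performs:", addPhoto.goal);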
Even though it might seem like just another figure explaining a theoretical approach, it becomes more practical when actually applied. Real-life situations always involve an intertwined and connected web of activities which can be distinguished according to their objects. Participation in connected activities with very different objects can cause tensions and distortions. The whole picture starts to make sense when filled with relevant details, such as the right subject, object and community with the mediating tools, rules and division of labour. However, Kuutti suggests that the model should be understood rather broadly: each of the mediating terms is historically formed and open to further development. Let us take a closer look at one of the studies mentioned above.
A sample case: everyday inclusive Web design
Everyday inclusive Web design is a design perspective that promotes the creation of accessible content by everyday end-users, as website accessibility (especially with more and more content being created by non-professionals) is a problem that affects millions of people with disabilities.
As we know from our personal experience of being surrounded by an ever growing number of social applications (such as YouTube, Flickr) and networks (for example MySpace, Facebook), recent developments in Web technology have provided new opportunities for end-users to participate as designers. The study looks at how professional system designers and end-users each engage in design activities within social software systems. As these activities are not independent, but rather interact with one another to produce the final site content, activity models and scenarios are used to describe and analyze the process and its possible contradictions.
I quite enjoyed how Kuutti points out that in activity theory contradictions are seen as sources of development; real activities are practically always in the process of working through some such contradictions. It is through these contradictions (problems, ruptures, breakdowns, clashes, etc.) that we improve ourselves or the systems and technologies we work with. In the case of everyday inclusive Web design, the contradictions found between personal expression and publishing objectives in end-user design activity, as well as between the perceived and actual number of system users with disabilities, lead to inaccessible design. With the help of the activity model, these issues were clearly visualized in order to provide suggestions and solutions for improvement.
In short, the study was able to conclude that the accessibility of social software systems depends on the cooperative work of system designers and end-users. The underlying structure of social software systems may be altered to increase users' awareness of accessibility issues and to encourage accessible design practices. Both system designers and end-users must act with accessibility in mind in order for the end result to be accessible. End-users require access to design tools that support accessible design practices, while system designers must instruct end-users about available accessibility features.
Sunday, November 14, 2010
Post 1: Review of the articles (multimedia, new media)
Week 44-45
Multimedia
Many names have emerged to describe computer-based forms, such as digital media, new media, hypermedia, or multimedia. The articles we had to read in week 44 looked at multimedia.
Rockwell and Mactavish combine several definitions with a focus on multimedia as a genre of communicative work, defining a multimedia work as a computer-based rhetorical artifact in which multiple media are integrated into an interactive whole. Multimedia works, whether born digital or remediated, share common characteristics, including emerging modes of electronic production, distribution, and consumption. However, when Manovich describes new media (a term closely related to multimedia), he identifies it with the use of a computer for distribution and exhibition rather than with production. To give an example, photos put on a CD-ROM that require a computer to view are considered new media, while the same photos printed as a book are not – even though those photos might have been taken with a digital camera.
There are a number of ways to classify multimedia works. For example, we could classify them in terms of their perceived use, from entertainment to education. We could look at the means of distribution and the context of consumption of such works, from free websites that require a high-speed Internet connection, to expensive CD-ROM games that require the latest video cards to be playable. We could classify multimedia by the media combined, from remediated works that take a musical work and add synchronized textual commentary, to virtual spaces that are navigated. Other criteria for classification could be the technologies of production, the sensory modalities engaged, the type of organization that created the work, or the type of interactivity.
Packer and Jordan mention many of the same concepts (integration, interactivity, hypermedia, immersion, and narrativity) to determine the scope of multimedia's capabilities for expression. Integration can be seen as the combining of artistic forms and technology into a hybrid form of expression, while hypermedia is the linking of separate media elements to one another to create a trail of personal association. Interactivity is the ability of the user to directly manipulate and influence their experience of media, whereas immersion is the experience of entering into the simulation or suggestion of a three-dimensional (3-D) environment. And finally, narrativity can be seen as the aesthetic and formal strategies resulting in non-linear expressive forms.
One of the latest developments in multimedia systems has been the 3-D virtual space, a term that applies to computer-simulated environments that can simulate places in the real world, as well as in imaginary worlds. While virtual worlds are becoming increasingly popular with a growing number of users losing – or in some cases finding – themselves in the fantasy environments, Rockwell and Mactavish point out another direction in the industry. Looking at the developments of the past decades, they say: "The desktop multimedia systems of the 1990s are now being repackaged as portable devices that can play multiple media. The keyboard and the mouse are being replaced by input devices like pen interfaces on personal digital assistants (PDAs). Rather than immersing ourselves in virtual caves, we are bringing multimedia computing out of the office or lab and weaving it in our surroundings. The challenge to multimedia design is how to scale interfaces appropriately for hand-held devices like MP3 players and mobile phones." Both are true – multimedia, after all, is the answer to multiple choices.
Coming back to Rockwell and Mactavish's attempt to describe multimedia, there are two ways we can think through it. The first is to think about multimedia through definitions, histories, examples, and theoretical problems. The second is to use multimedia to think and to communicate thought. To think with multimedia is to use multimedia to explore ideas and to communicate them. In a field like multimedia, where what we think about is so new, it is important to think with.
I quite agree. As we have seen throughout other courses, analyzing and examining the theoretical foundations of new media (and other computer-based forms) together with their various characteristics (such as interactivity), it is rather impossible to elaborate an all-encompassing definition. It is easier and far more exciting to think of it through personal ideas and user experiences. After all, the dynamic life of today's new media content and its interactive relationship with the media consumer moves, breathes and flows with pulsing excitement in real time. Yet we try so hard to capture it in countless definitions, one more lifeless than another. New media has become a true benefit to everyone because it allows people to express their artwork in more than one way with the technology we have today, and there is no longer a limit to what we can do with our creativity. So let us be creative.
Saturday, November 13, 2010
Task 8: From mass media to personal media
New Interactive Environments
Lüders, M. (2008). Conceptualizing personal media.
We have learnt that new media communication technologies enable and facilitate user-to-user interactivity and interactivity between user and information. The Internet, the network of networks that consists of millions of private, public, academic, business, and government networks of local to global scope, connects billions of users worldwide and replaces the "one-to-many" model of traditional mass communication with the possibility of a "many-to-many" web of communication, where any individual with the appropriate technology can produce his or her own online media.
New media, together with technology convergence, shifts the model of mass communication and radically shapes the ways we interact and communicate with one another. The article by Marika Lüders describes how the digitalization and personal use of media technologies have destabilized the traditional dichotomization between mass communication and interpersonal communication, and therefore between mass media and personal media (e.g. mobile phones, email, instant messenger, blogs and photo-sharing services). It also aims to point out some social implications of the recent and ongoing development of digital personal media.
The rise of new media has increased communication between people all over the world via the Internet. With the development and appropriation of digital personal media, mediated social interaction has the potential to be near all-pervasive in our everyday life. It has allowed people to express themselves through blogs, websites, pictures, and other user-generated media. Lüders writes: "The combination of the Internet, PC and evolvement of less expensive and more manageable media production tools give leeway for the amateur media producer. Anyone becomes qualified to be a media producer and is likely to have an audience to their productions. Examples are ample. The successes of photo-sharing services such as Flickr and Deviantart are only two cases that consolidate the thesis of the amateur media producer." This means that individuals now have a means of exposure comparable in scale to that previously restricted to a select group of mass media producers. Individual users increasingly construct media messages, social discourses multiply, and mass media institutions no longer reign as exclusive storytellers with audiences beyond immediate social and geographical borders.
At the same time, the importance of active and creative amateur users is stressed among key actors within the mass media industry, further complicating the distinction between personal media and mass media. In August 2006, CNN launched CNN Exchange (now CNN iReport), a service referred to as "YouTube for news" – you can submit your own newsworthy videos, audio clips and articles and perhaps see them on the site and TV. The problem, of course, is that CNN controls the whole experience. On the other hand, there are weblogs that, in recent years, have gained increasing notice and coverage for their role in breaking, shaping, and spinning news stories. Blogs often become more than a way to just communicate; they become a way to reflect on life, or works of art. Blogs have become the primary Internet medium for individual professional and non-professional self-expression. Few personal blogs rise to mainstream fame, but some quickly garner an extensive following. Many bloggers, particularly those engaged in participatory journalism, differentiate themselves from the mainstream media, while others are members of that media working through a different channel. Some institutions see blogging as a means of "getting around the filter" and pushing messages directly to the public.
Therefore, as explained by Lüders, with the digitalization of media, in certain cases the same media technologies are used for both mass media and private individual purposes. For example, the Internet is the technological foundation of both commercial online magazines and personal homepages. Sharing the technologies, however, does not mean that distinctions between mass media and personal media are no longer pertinent. Personal media are distinguishable from mass media, if not always technically, then at least socially. Technologies in practice take on different meanings in different contexts. Technologies are more than their technical elements and media forms are more than their technology. Media forms are the result of the interrelations between media technologies and their function within our everyday lives. The Internet as technology constitutes various media forms, which then are characterized further by different genres. Blogs clearly fall into various genres: personal diaries, academic, research, travel, campaigning or food, among many others.
All this makes personally-mediated messages and user-generated content rather chaotic compared to traditional mass media, and requires people to be multimodal-literate: handling a complex mix of audiovisual-textual media technologies, producing and deciphering meanings. Yet the advantages that this new kind of media has enabled for us as media producers and consumers preponderate. Vin Crosbie, media industry consultant and professor of New Media at Syracuse University, describes the new media in "What is New Media?" as individuation media that has the advantages of both the interpersonal and the mass media, but without their complementary disadvantages.
"No longer must anyone who wants to individually communicate a unique message to each recipient have to be restricted to communicating with only one person at a time. No longer must anyone who wants at once to communicate a message to a mass of people be unable to individualize totally the content of that message for each recipient."
As for Lüders' article, the conclusion remained unclear to me. Does she consider the blurring borders between mass communication and interpersonal communication a problem, or does she simply acknowledge the fact that it is happening – or has already happened? One way or another, I tend to think that the mix is a benefit to both sides, and both the interpersonal and the mass media can complement one another, as seen in the emerging new media.
Thursday, November 11, 2010
HTML5 Project Brings Tablet Reading Experience to Any Browser
With iPad apps popping up like mushrooms after the rain, an HTML5 project brings a similar tablet reading experience to desktop and mobile browsers.
The Center for Public Integrity has fashioned a template that mimics the in-app reading experience of newspapers – minus, of course, some of the interactive capabilities, such as swiping, offered by touchscreen devices, announces Lauren Indvik on Mashable.com.
One of the main advantages of the HTML5 template is that it is significantly cheaper to produce than a mobile app for a complex operating system like iOS or Android, meaning that more news organizations will be able to render digital, app-like experiences without hiring a developer. The format is also entirely mobile-friendly.
The template was created in conjunction with the digital reading platform Treesaver. Treesaver divides content into pages, automatically adjusting the layout to the size of the screen. It works on any device that has a web browser: desktop PC or Mac, notebook, netbook, iPad and iPhone. It's produced with web standards – HTML, CSS and JavaScript. You can embed video or Flash in it, just as you can with any web site. There is no app to download. No plug-in to install.
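To make the idea more tangible, here is a minimal sketch in TypeScript (not Treesaver's actual API, which I have not used – just standard browser APIs and breakpoints I made up) of how such a reader might pick a column layout from the viewport width and reflow on resize:

// Sketch of viewport-adaptive pagination: choose a column layout from
// the current window width and let CSS rules keyed on [data-layout]
// do the actual styling. The breakpoints below are illustrative guesses.

type Layout = "single" | "double" | "triple";

function chooseLayout(width: number): Layout {
  if (width < 600) return "single";   // phone-sized screens: one column per page
  if (width < 1000) return "double";  // tablet-sized screens: two columns
  return "triple";                    // desktop screens: three columns
}

function applyLayout(): void {
  const layout = chooseLayout(window.innerWidth);
  document.body.dataset.layout = layout;
}

window.addEventListener("resize", applyLayout);
applyLayout(); // set the initial layout on load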
Have a look...
Wednesday, November 10, 2010
Facebook stories
You spend half the day browsing through pictures of friends on Facebook. Change your status six times a day. And have a number of friends somewhere in the four digits. Congratulations, you're a Facebook addict.
Facebook, The Social Network, has changed and touched the lives of millions. Facebook is all about the individual and collective experiences of you and your friends. It's filled with hundreds of millions of stories.
Which ones inspire you? What's your Facebook story? Is it a story about love, family, friendship, religion, health, education, university, travel, or other? Is it a story about happiness or grief?
Millions of stories by location and theme have been shared on stories.facebook.com.
Facebook has more than 500 million members, and more than half of them use Facebook every day, with growth accelerating thanks to mobile.
Week 8: Ethics and Law in New Media
November 8-14
Topic 15: The Proprietary World: The WIPO Intellectual Property model
Topic 16: More WIPO: Contracts and Licenses
Study the Anglo-American and Continental European school of IP. Write a short comparative analysis to your blog (if you have clear preference for one over another, explain that, too).
The recognition and protection of private property rights, including IP, largely depends on whether a nation is a common law or civil law jurisdiction, and also, on whether it has adopted a Continental or an Anglo-American form of capitalism as its national economic system.
[World map of legal systems – blue: civil law; red: common law; brown: bijuridical (civil and common law); green: customary law; yellow: fiqh.]
"It is often said that the common law is primarily concerned with the protection of economic rights, whereas countries that follow civil law tradition concentrate on the moral rights of the author and analogous creators. While there is doubtless a great deal of oversimplification in these views, and maybe even a measure of racial stereotyping, there is also more than a grain of truth." (David T. Keeling, Intellectual Property Rights in EU Law: Free Movement and Competition Law, p. 263)
The prime inspiration for both systems, the Continental and the Anglo-American, is the cultural importance of authors, but the Continental European school focuses even more on the author, while the Anglo-American school focuses more on the commercial side of copyright. Copyright protection is divided into two categories, economic rights and moral rights. Moral rights focus more on the author as a person, while economic rights focus on the financial profit which can be made from a work.
Moral rights include the right of attribution, the right to have a work published anonymously or pseudonymously, and the right to the integrity of the work. Preserving the integrity of the work bars it from alteration, distortion, or mutilation. Moral rights are distinct from any economic rights tied to copyright: even if an artist has assigned his or her rights to a work to a third party, he or she still maintains the moral rights to the work. In most of Europe, it is not possible for authors to assign their moral rights (unlike the copyright itself, which is regarded as an item of property that can be sold, licensed, lent, mortgaged or given away like any other property), though they can agree not to enforce them, and such terms are very common in European contracts. Moral rights have had a less robust tradition within the Anglo-American school, where their protection has, up until today, been very slim and is still not as strong as within the Continental European school.
The most important international convention regulating copyright is the Berne Convention, which was a compromise between the Anglo-American and the Continental European approaches. Article 6bis of the Berne Convention protects attribution and integrity, stating: "Independently of the author's economic rights, and even after the transfer of the said rights, the author shall have the right to claim authorship of the work and to object to any distortion, mutilation or other modification of, or other derogatory action in relation to, the said work, which would be prejudicial to his honor or reputation." However, when the United States signed the Berne Convention, it stipulated that the Convention's "moral rights" provisions were addressed sufficiently by other statutes, such as laws covering slander and libel.
Despite the laws being standardized to some extent, inconsistencies among different nations remain: each jurisdiction has separate and distinct laws and regulations about copyright. The World Intellectual Property Organization (WIPO) summarizes each of its member states' intellectual property laws on its website. WIPO is a specialized agency of the United Nations that was established by the WIPO Convention in 1967 with a mandate from its Member States to promote the protection of IP throughout the world through cooperation among states and in collaboration with other international organizations. Its headquarters are in Geneva, Switzerland.
See World Intellectual Property Organization - An Overview (2010 Edition) for more information.
WIPO has launched WIPO Lex, a one-stop search facility for the national intellectual property laws and treaties of WIPO, WTO and UN members.
Sunday, November 7, 2010
Task 7: In search for my own understanding of interactivity
The two articles on interactivity previously studied, „Interactivity: Tracking a New Concept in Media and Communication Studies“ by J. F. Jensen and „Interactivity: A Concept Explication“ by S. K. Kiousis, both confirmed that the concept of interactivity is certainly multi-discursive and thus depends to a very large extent on the context in which it is used for the meaning to be clear. This makes the search for our own understanding of interactivity a great quest to take on where no interpretation can be wrong.
From the academic perspective, we may analyze the term interactivity by referring, first and foremost, to the fields of sociology, communication studies and informatics (including information and computer science). But when we think of interactivity, in search of our own understanding of one of the media community's most used buzzwords, we often perceive it best in the context of personal experience. As the second half of the definition by Kiousis explicates, "With regard to human users, it [interactivity] additionally refers to their ability to perceive the experience as a simulation of interpersonal communication and increase their awareness of telepresence." Moreover, most interactive computing systems serve some human purpose and interact with humans in human contexts.
Interactivity is a central concept in new media. In simple words, interactive new media holds out a possibility of on-demand access to content any time, anywhere, on any digital device, as well as interactive user feedback, creative participation and community formation around the media content. It also breaks the connection between physical place and social place, making physical location much less significant for our social relationships, and has the ability to connect like-minded others worldwide. Social interaction, using web-based technologies to turn communication into interactive dialogues, has become part of our personal and professional life. Popular networking sites such as Facebook and Twitter as well as personal weblogs are commonly used for socialization and connecting with friends, relatives, and employees, wherever in the world they may currently be.
Interactivity, the way I feel and perceive it, even though closely tied to technology, is a measure of a medium's potential ability to help the user connect, interplay, socialize, retrieve, personalize and exchange information, give feedback, and form and participate in communities – in other words, a means that helps to keep interpersonal communication alive at any time, anywhere, with anyone we choose. This communication is hoped to be a mutual (reciprocal), lively action, just as the words 'inter' and 'activity' presume and just as we expect communication to always be. True, we can also communicate and interact with computer software, an Internet website, or whatever other artifact, but I find the social connotations more cherished.
To answer the question whether interactivity as a concept has changed in the past 10, 100 or 1000 years, I believe the heart of it has remained the same. What changes constantly is the technology we employ to interact and the degree to which this technology makes the interaction possible. Patently, the level of interactivity varies between different media and rises as more advanced technologies are introduced. Theoretical and operational definitions are useful for understanding the background behind the concept; however, interactivity must be "touched and felt" (perceived) as part of our life to mean more than a concept explication to us.
According to J. D. Peters, author of the prophetic book called “Speaking into the Air: A History of the Idea of Communication”, the ideal of interactivity, the search for instantaneous contact with others, has a long and fraught history in western culture. He traced it back to St Augustine, for whom the epitome of perfect communication was the angel, a word derived from the Greek for “messenger”. Coming back to the 21st century, Jensen writes, “The culture has lived out what we might call an interactive turn“. Because interactivity as we know it today is so closely tied to technology, many of the explications – including the one elaborated by Kiousis – make mediated communication via technology a central attribute defining interactivity and exclude pure interpersonal communication.
Or as Peters argues, the aim of modern media has been to "mimic the angels by mechanical or electronic means". To illustrate the situation, I would like to share a video recently reviewed by one of my fellow students on his blog. The video, which gave me goosebumps, reminds us that interpersonal communication, even though often substituted – or, more correctly, mediated – by the various means of interactive computer technology, continues to play an important role in our life.
Tuesday, November 2, 2010
Week 7: Ethics and Law in New Media
November 1-7
Topic 13: The Author vs the Information Society
Read Chapter 3 "Against Intellectual Property" of the Brian Martin's book. Write a blog review (especially, comment on his strategies for change).
The classic argument for copyright is the view that granting developers temporary monopolies over their works encourages further development and creativity by giving the developer a source of income. A central anti-copyright argument is that copyright has never been of net benefit to society and instead serves to enrich a few at the expense of creativity.
Or as Brian Martin puts it: "Intellectual work is inevitably a collective process. No one has totally original ideas: ideas are always built on the earlier contributions of others. Intellectual property is theft, sometimes in part from an individual creator but always from society as a whole."
The alternative to intellectual property is straightforward: intellectual products should not be owned, as in the case of everyday language. That means not owned by individuals, corporations, governments, or the community as common property. It means that ideas are available to be used by anyone who wants to. Strategies against intellectual property include civil disobedience, promotion of non-owned information, and fostering of a more cooperative society. Challenging intellectual property must involve the development of methods to support creative individuals.
Strategies for change
Martin proposes six strategies for challenging intellectual property:
- Change thinking. Rather than talking of intellectual property in terms of property and trade, it should be talked about in terms of free speech and its impediments. Once intellectual property is undermined in the minds of many citizens, it will become far easier to topple its institutional supports.
- Expose the costs. It can cost a lot to set up and operate a system of intellectual property. For instance, a middle-ranking country from the First World, such as Australia, pays far more for intellectual property – mostly to the US – than it receives. Once the figures are available and understood, this will help reduce the legitimacy of the world intellectual property system.
- Reproduce protected works. From the point of view of intellectual property, this is called "piracy". It happens every day when people photocopy copyrighted articles, tape copyrighted music, or duplicate copyrighted software. Unfortunately, illegal copying is not a very good strategy against intellectual property, any more than stealing goods is a way to challenge ownership of physical property.
- Openly refuse to cooperate with intellectual property. This is far more powerful than illicit copying. The methods of nonviolent action can be used here, including noncooperation, boycotts and setting up alternative institutions. By being open about the challenge, there is a much greater chance of focusing attention on the issues at stake and creating a dialogue. Once mass civil disobedience to intellectual property laws occurs, it will be impossible to stop.
- Promote non-owned information. A good example is public domain software, which is computer software made available free to anyone who wants it. A suitable alternative to copyright is shareright: a piece of freeware might be accompanied by the notice, "You may reproduce this material if your recipients may also reproduce it." Another approach, called copyleft, requires those who pass on a free program to include the rights to use, modify, and redistribute the code; the code and the freedoms become legally inseparable. The developers of "freeware" gain satisfaction out of their intellectual work and out of providing a service to others.
- Develop principles to deal with credit for intellectual work. In a more cooperative society, credit for ideas would not be such a contentious matter. In a society with less hierarchy and greater equality, intrinsic motivation and satisfaction would be the main returns from contributing to intellectual developments. The less there is to gain from credit for ideas, the more likely people are to share ideas rather than worry about who deserves credit for them. Nonetheless, principles for crediting intellectual work remain important even if credit is not rewarded financially; this includes guidelines for not misrepresenting another person's work.
Analyzing the proposed strategies for change, what seems right and what seems wrong?
Pros
As someone who has published a book, I'm personally against the strategy of reproducing protected works. Illicit copying should not be tolerated – which does not mean that the authors of a creative work would be unwilling to share their intellectual property if kindly asked. We cannot expect people whose only income relies on the books they write, the photographs they take or the programs they create to always share their work at no cost, or to gain satisfaction only out of providing a free service to others, though at other times that might freely be the case. Sometimes we do produce works purely for personal satisfaction, or even for respect and recognition from peers. Other times, we kindly share part of our creative work for free, and by doing that still contribute to the human need to share or, from a cultural perspective, to the mash-up of culture and knowledge in search of enrichment. An example of how partial sharing might be useful is the business model of Google Books, which displays millions of pages of copyrighted and uncopyrighted books as part of a business plan drawing its revenue from advertising. At the same time, Google Books blocks out large sections of those same books, which incentivizes purchases and supports the legitimate interests of rights holders.
Copyright is meant to protect creators. Innovations are often both creative and expensive endeavors. Copyright laws protect innovators who invest huge amounts of time and money in a project from having it stolen. The establishment of copyright laws has also led to more creators documenting their innovations: prior to copyright laws, individuals were extra secretive, sometimes choosing not to document an innovation for fear of the idea being stolen.
Cons
While offering advantages, such as protecting creators, copyright laws also have disadvantages, like creating monopolies. The right to monopolize the sale of a product or its reproduction puts a lot of power in the hands of one person or company. Monopolies over items like prescription drugs mean that companies can charge any amount they desire, making the medicine too expensive for lower socioeconomic families or individuals to afford. Unfortunately, copyright laws most often benefit large corporations and businesses rather than individuals. Instead of helping the public with innovations, they become a costly burden that can only be accessed by the wealthy. Companies also rely on outdated patents to generate income rather than creating new, more efficient innovations. This is where mass civil disobedience to intellectual property laws might help: by showing discontent cooperatively and in solidarity, there is indeed a greater chance of focusing attention on the issues at stake and creating a dialogue with the copyright holders.
In the context of the Internet and Web 2.0 it is quite obvious that copyright law needs to be adapted to modern information technology. Copyright has become obsolete with regard to the Internet, the cost of trying to enforce it is unreasonable, and business models instead need to adapt to the reality of the darknet (a phrase used to refer collectively to all covert communication networks). Many citizens of the Internet want to share their work – and the power to reuse, modify, and distribute their work – with others on generous terms. This is particularly so in the context of Web 2.0 and the increase in user-generated content. However, it is also true that many Web 2.0 users often do not realise they are inadvertently engaging in copyright infringement (the most common case, quite relevant for us as new media students, is blogging and the associated passing around of articles and images). Yet we have all come to love free and open source software and want the collaborative creation to continue, which predisposes us to support the strategy of promoting non-owned information.
To conclude
In one way or another, if intellectual property is to be challenged, people need to be reassured that misappropriation of ideas will not become a big problem. Creative society still needs some sort of laws, but perhaps they should be less constricting, such as the above mentioned shareright and copyleft alternatives. Creative Commons, a non-profit organization devoted to expanding the range of creative works available for others to build upon legally and to share, also states that it is not anti-copyright per se, but argues for copyright to be managed in a more flexible and open way.
The current copyright system needs to be brought into line with reality and the needs of society. Hipatia argues that this would "provide the ethical principles which allow the individual to spread his/her knowledge, to help him/herself, to help his/her community and the whole world, with the aim of making society ever more free, more equal, more sustainable, and with greater solidarity." As pointed out by Martin in the article, "in a society with less hierarchy and greater equality, intrinsic motivation and satisfaction would be the main returns from contributing to intellectual developments."
Cory Doctorow and Boing Boing
Cory Doctorow, a Canadian blogger, journalist and science fiction author, as well as an activist in favour of liberalising copyright laws and a proponent of the Creative Commons organisation, believes that copyright laws should be liberalized to allow for the free sharing of all digital media. He has also advocated file sharing. He argues that copyright holders should have a monopoly on selling their own digital media, and copyright laws should only come into play when someone attempts to sell a product currently under someone else's copyright. Doctorow is an opponent of digital rights management (DRM), claiming that it limits the free sharing of digital media and frequently causes problems for legitimate users (including registration problems that lock users out of their own purchases and prevent them from being able to move their media to other devices and platforms).
Boing Boing is a publishing entity, first established as a magazine, later becoming a group blog (co-edited by Cory Doctorow). Boing Boing became a website in 1995 and later relaunched as a weblog on January 21, 2000, described as a "directory of wonderful things." The site's own original content is licensed under a Creative Commons Attribution Non-Commercial license, as of August 2008.
Topic 14: The History and Development of Copyright
No task here this week.
Topic 13: The Author vs the Information Society
Read Chapter 3 "Against Intellectual Property" of the Brian Martin's book. Write a blog review (especially, comment on his strategies for change).
The classic argument for copyright is the view that granting developers temporary monopolies over their works encourages further development and creativity by giving the developer a source of income. A central anti-copyright argument is that copyright has never been of net benefit to society and instead serves to enrich a few at the expense of creativity.
Or as Brian Martin puts it: "Intellectual work is inevitably a collective process. No one has totally original ideas: ideas are always built on the earlier contributions of others. Intellectual property is theft, sometimes in part from an individual creator but always from society as a whole."
The alternative to intellectual property is straightforward: intellectual products should not be owned, as in the case of everyday language. That means not owned by individuals, corporations, governments, or the community as common property. It means that ideas are available to be used by anyone who wants to. Strategies against intellectual property include civil disobedience, promotion of non-owned information, and fostering of a more cooperative society. Challenging intellectual property must involve the development of methods to support creative individuals.
Strategies for change
- Change thinking. Rather than talking about intellectual property in terms of property and trade, we should talk about it in terms of free speech and its impediments. Once intellectual property is undermined in the minds of many citizens, it will become far easier to topple its institutional supports.
- Expose the costs. It can cost a lot to set up and operate a system of intellectual property. For instance, a middle-ranking country from the First World, such as Australia, pays far more for intellectual property - mostly to the US - than it receives. Once the figures are available and understood, this will aid in reducing the legitimacy of the world intellectual property system.
- Reproduce protected works. From the point of view of intellectual property, this is called "piracy". This happens every day when people photocopy copyrighted articles, tape copyrighted music, or duplicate copyrighted software. Unfortunately, illegal copying is not a very good strategy against intellectual property, any more than stealing goods is a way to challenge ownership of physical property.
- Openly refuse to cooperate with intellectual property. This is far more powerful than illicit copying. The methods of nonviolent action can be used here, including noncooperation, boycotts, and setting up alternative institutions. Being open about the challenge gives a much greater chance of focussing attention on the issues at stake and creating a dialogue. Once mass civil disobedience to intellectual property laws occurs, it will be impossible to stop.
- Promote non-owned information. A good example is public domain software: computer software made available free to anyone who wants it. A suitable alternative to copyright is shareright; a piece of freeware might be accompanied by the notice, "You may reproduce this material if your recipients may also reproduce it." Another approach, called copyleft, requires those who pass on a free program to include the rights to use, modify, and redistribute the code, so that the code and the freedoms become legally inseparable (see the sketch after this list). The developers of "freeware" gain satisfaction out of their intellectual work and out of providing a service to others.
- Develop principles to deal with credit for intellectual work. In a more cooperative society, credit for ideas would not be such a contentious matter. In a society with less hierarchy and greater equality, intrinsic motivation and satisfaction would be the main returns from contributing to intellectual developments. The less there is to gain from credit for ideas, the more likely people are to share ideas rather than worry about who deserves credit for them. Nonetheless, principles to deal with credit for intellectual work remain important even if credit is not rewarded financially. This includes guidelines for not misrepresenting another person's work.
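To make the copyleft idea concrete, here is a minimal sketch of how such a notice might sit at the top of a source file. The module and author names are hypothetical, invented only for illustration; the notice wording follows the standard text the GNU GPL recommends for applying the license:

# freedom_example.py - a hypothetical module illustrating a copyleft notice
# Copyright (C) 2010 A. N. Author
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.

def greet():
    # Anyone who redistributes this file, modified or not, must pass on
    # the same rights to use, modify, and redistribute it.
    print("Hello, freedom!")

if __name__ == "__main__":
    greet()

A shareright notice, by contrast, would simply replace the license paragraphs with the one-line condition quoted above: shareright only asks that recipients be allowed to reproduce the material, while copyleft legally binds the rights to use, modify, and redistribute to every copy of the code.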
Analyzing the proposed strategies for change: what seems right, and what seems wrong?
Pros
As someone who has published a book, I am personally against the strategy of reproducing protected works. Illicit copying should not be tolerated, which does not mean that the authors of a creative work would be unwilling to share their intellectual property if kindly asked. We cannot expect people whose only income comes from the books they write, the photographs they take, or the programs they create to always share their work at no cost, or to gain satisfaction solely from providing a free service to others, though at times that may well be the case. Sometimes we produce works purely for personal satisfaction, or for respect and recognition from peers. Other times we willingly share part of our creative work for free and, by doing so, still contribute to the human need to share or, from a cultural perspective, to the mash-up of culture and knowledge in search of enrichment. An example of how partial sharing can be useful is the business model of Google Books, which displays millions of pages of copyrighted and uncopyrighted books as part of a business plan that draws its revenue from advertising. At the same time, Google Books blocks out large sections of those same books, which encourages purchases and supports the legitimate interests of rights holders.
Copyright is meant to protect creators. New innovations are often both creative and expensive endeavors, and copyright laws protect innovators who invest huge amounts of time and money in a project from having it stolen. The establishment of copyright laws has also led more creators to document their innovations: before copyright, individuals were more secretive, sometimes choosing not to document an innovation at all for fear of the idea being stolen.
Cons
While offering advantages, such as protecting creators, copyright laws also have disadvantages, like creating monopolies. The right to monopolize the sale of a product or its reproduction puts a lot of power in the hands of one person or company. Monopolies over items like prescription drugs mean that companies can charge any amount they desire, making medicine too expensive for lower socioeconomic families or individuals to afford. Unfortunately, copyright laws most often benefit large corporations and businesses rather than individuals. Instead of helping the public with innovations, they become a costly burden that can only be accessed by the wealthy. Companies also rely on outdated patents to generate income rather than creating new, more efficient innovations. This is where mass civil disobedience to intellectual property laws might help: by cooperatively and openly showing their discontent, people stand a much greater chance of focussing attention on the issues at stake and creating a dialogue with the copyright holders.
In the context of the Internet and Web 2.0, it is quite obvious that copyright law needs to be adapted to modern information technology. Copyright has become obsolete with regard to the Internet, the cost of trying to enforce it is unreasonable, and business models instead need to adapt to the reality of the darknet (a phrase used to refer collectively to all covert communication networks). Many citizens of the Internet want to share their work - and the power to reuse, modify, and distribute their work - with others on generous terms. This is particularly so in the context of Web 2.0 and the rise of user-generated content. However, many Web 2.0 users do not realise that they are inadvertently engaging in copyright infringement (the most common case, quite relevant to us as new media students, is blogging and the associated passing around of articles and images). Yet we have all come to love free and open source software and want collaborative creation to continue, which predisposes us to support the strategy of promoting non-owned information.
To conclude
In one way or another, if intellectual property is to be challenged, people need to be reassured that misappropriation of ideas will not become a big problem. A creative society still needs some sort of laws, but perhaps less constricting ones, such as the above-mentioned shareright and copyleft alternatives. Creative Commons, a non-profit organization devoted to expanding the range of creative works available for others to legally build upon and share, also states that it is not anti-copyright per se, but argues for copyright to be managed in a more flexible and open way.
The current copyright system needs to be brought into line with reality and the needs of society. Hipatia argues that this would "provide the ethical principles which allow the individual to spread his/her knowledge, to help him/herself, to help his/her community and the whole world, with the aim of making society ever more free, more equal, more sustainable, and with greater solidarity." As Martin points out in the chapter, "in a society with less hierarchy and greater equality, intrinsic motivation and satisfaction would be the main returns from contributing to intellectual developments."
Cory Doctorow and Boing Boing
Cory Doctorow, a Canadian blogger, journalist, and science fiction author, as well as an activist for liberalising copyright laws and a proponent of the Creative Commons organisation, believes that copyright law should be relaxed to allow the free sharing of all digital media. He has also advocated filesharing. He argues that copyright holders should have a monopoly on selling their own digital media, and that copyright law should only come into play when someone attempts to sell a product currently under someone else's copyright. Doctorow is an opponent of digital rights management (DRM), claiming that it limits the free sharing of digital media and frequently causes problems for legitimate users, including registration problems that lock users out of their own purchases and prevent them from moving their media to other devices and platforms.
Boing Boing is a publishing entity, first established as a magazine and later becoming a group blog co-edited by Cory Doctorow. It became a website in 1995 and relaunched as a weblog on January 21, 2000, described as a "directory of wonderful things." As of August 2008, the site's own original content is licensed under a Creative Commons Attribution Non-Commercial license.
Topic 14: The History and Development of Copyright
No task here this week.