Archive for the 'Technology' Category

The end of the university? Not likely

Posted by Eric on September 23rd, 2013

This article was first published in University World News.

This year has frequently seen apocalyptic headlines about the end of the university as we know it. Three main drivers have been and still are fuelling these predictions: the worldwide massification of higher education; the increasing use of information and communication technology in teaching and the delivery of education; and the ongoing globalisation of higher education. These developments will make the traditional university obsolete in 2038. At least, that’s what some want us to believe.

The massification of higher education worldwide – even more than the massification in Western Europe, the United States and Japan in the post-war period – demands new and more efficient types of delivery. The acceleration in the demand for higher education, especially in China and other parts of South and East Asia, has made it nearly impossible for governments to respond to this demand. The increase in demand, together with decreased funding due to the financial crisis, has put pressure on traditional modes of university education.

Innovations in ICT have expanded the possibilities for delivering education and have led to new teaching instruments. The advent of massive open online courses, or MOOCs, in 2012 combined new technologies in order to reach a massive audience. These developments are intensified through the ongoing globalisation of higher education.

Because of the globalisation process, opportunities for where to study have increased, ranging from attending universities abroad to taking online courses.

The concept of ‘the university’ is gone

The conjunction of these developments has led many to believe that the centuries-old model of the contemporary university is coming to an end. If we believe them, the higher education landscape of 2038 will be completely different from the current one. I would argue that these predictions show both a lack of knowledge about the contemporary landscape of higher education and a lack of historical understanding of the development of universities.

The time when the concept of the ‘university’ was clear-cut, referring to a single organisational and educational model, is long gone. Especially since the massification of higher education in the post-war period, this single model has been accompanied by a wide variety of other higher education institutions. More vocationally oriented institutions were established, such as community colleges. Very large distance-education institutions emerged in many Western countries and beyond. What’s more, when the organisational boundaries of the traditional university were reached, new activities and new organisations appeared. One thing is for sure: in not one country in the world is the traditional university model representative of the entire higher education system any more.

But even if the proclaimers of the end of the university are only referring to the traditional model (whatever that is), they will be proven wrong in 2038, and long after that. The traditional university has been one of the most enduring institutions in the modern world. Granted, university research and university teaching have adapted constantly to changes in the economy and society. This process of adaptation might be too slow, according to many, but it is a constant process in the university. Despite this continual change and adaptation, the model of the university as we know it has changed very little.

The organisation of faculties, schools and departments around disciplines, accountability in the form of peer review, comparable tenure and promotion systems, the connection between education and research, the responsibility of academic staff for both education and research at both graduate and undergraduate level, the primacy of face-to-face instruction, etc. – these are all characteristics that can be found in universities throughout the world and which have existed for many, many decades – if not centuries.

My bet is they will still be there in 2038. It would be rather naive to think that a financial crisis or even a new type of delivery, like MOOCs, will profoundly change these enduring structures and beliefs.

Universities’ DNA

In the words of Clayton Christensen and Henry Eyring, authors of The Innovative University, we are talking about the ‘DNA’ of the university, and saying that this does not change easily. They argue that university DNA is not only similar across institutions but is also highly stable, having evolved over hundreds of years. Replication of that DNA occurs continually, as each retiring employee or graduating student is replaced by someone screened against the same criteria applied to his or her predecessor. The way things are done is determined not by individual preference but by institutional procedure, written into the ‘genetic code’.

New technologies will enable new forms of education and delivery. In the coming 25 years, we will see the emergence of new institutions focusing on specific target groups and we will witness traditional institutions employing these new technologies. But will this make the university as we know it obsolete? No, it will not, because the function of the university as we know it is much more comprehensive than ‘just’ the production and transfer of knowledge.

Students attend universities not simply to ‘consume’ knowledge in the form of a collection of courses. They go there for an academic experience; and they go there for a degree that will provide them with an entry ticket to the labour market and which will give them a certain status. Does the fact that I do not see any substantial changes in 2038 mean that there should be none? The fact that structures and beliefs endure does not always mean they still serve the functions they used to.

This is also the case with many of the traditional structures and beliefs in the university. Holding on to these practices is not an end in itself. At least, it should not be, yet in making policy and in making predictions, it is good to take into account the stabilising character of these structures and beliefs.

25 years from now

Because of the university DNA, there is rarely a revolution of the type so frequently predicted by politicians, business and consultants. This steadiness is not only a major source of universities’ value to a fickle, fad-prone society; it is also why one cannot make the university more responsive to modern economic and social realities merely by regulating its behaviour. A university cannot be made more efficient by simply cutting its operating budget, nor can universities be made by legislative fiat to perform functions for which they are not expressly designed. Another argument why the university as we know it will still be there in 2038!

Many say that the best way to predict the situation in 25 years is to look back 25 years and see what has changed since then. I was first introduced to university life 25 years ago, in what you could call a traditional university. In the past 25 years I have studied and worked at four universities in and outside the Netherlands. At the time of writing, I work at Leiden University, another traditional university.

Comparing the university of 1988 with the university of 2013, it is remarkable how little these organisations have changed. Of course, the university has adapted to societal, political and economic changes, but at its core the traditional university has remained very much the same. I can safely say that the DNA of the traditional university has not changed in the past 25 years and I can safely predict that it will not change in the coming 25 years. And essentially, that is a good thing.

The Principle of Open Access

Posted by Eric on January 13th, 2009

I’m reading ‘The Access Principle’ by John Willinsky, a Canadian scholar now at the Stanford University School of Education. He is also the driving force behind the Public Knowledge Project, dedicated to improving the scholarly and public quality of research. I heard about his book some time ago when developing an interest in the open access movement (especially in relation to research in developing countries). But I got really interested after reading the intro to this book review by Scott Aaronson:

I have an ingenious idea for a company. My company will be in the business of selling computer games. But, unlike other computer game companies, mine will never have to hire a single programmer, game designer, or graphic artist. Instead I’ll simply find people who know how to make games, and ask them to donate their games to me. Naturally, anyone generous enough to donate a game will immediately relinquish all further rights to it. From then on, I alone will be the copyright-holder, distributor, and collector of royalties. This is not to say, however, that I’ll provide no “value-added.” My company will be the one that packages the games in 25-cent cardboard boxes, then resells the boxes for up to $300 apiece.

But why would developers donate their games to me? Because they’ll need my seal of approval. I’ll convince developers that, if a game isn’t distributed by my company, then the game doesn’t “count” — indeed, barely even exists — and all their labor on it has been in vain.

Admittedly, for the scheme to work, my seal of approval will have to mean something. So before putting it on a game, I’ll first send the game out to a team of experts who will test it, debug it, and recommend changes. But will I pay the experts for that service? Not at all: as the final cherry atop my chutzpah sundae, I’ll tell the experts that it’s their professional duty to evaluate, test, and debug my games for free!

On reflection, perhaps no game developer would be gullible enough to fall for my scheme. I need a community that has a higher tolerance for the ridiculous — a community that, even after my operation is unmasked, will study it and hold meetings, but not “rush to judgment” by dissociating itself from me. But who on Earth could possibly be so paralyzed by indecision, so averse to change, so immune to common sense?

I’ve got it: academics!

This was just the hilarious but oh so true intro to the actual review. Read the rest here. Or order Willinsky’s book here. And of course you can also download his book for free right here.

European Institute of Innovation and Technology: Go!

Posted by Eric on September 15th, 2008

Excellence needs flagships! That is why Europe must have a strong European Institute of Technology, bringing together the best brains and companies and disseminating the results throughout Europe. That is how José Manuel Durão Barroso introduced the European Institute of Technology about two and a half years ago. Today was the inaugural meeting of the first Governing Board of the EIT.

The Board’s 18 high-level members, coming from the worlds of business, higher education and research, all have a track record in top-level innovation and are fully independent in their decision-making. The Board will be responsible for steering the EIT’s strategic orientation and for the selection, monitoring and evaluation of the Knowledge and Innovation Communities (KICs).

After discussions on whether the European version of MIT would become a virtual institute, a brick-and-mortar institution or something in between… After a study claimed that a European Institute of Technology was actually not necessary… After feasibility studies had been neglected…

After the decision for the establishment of the EIT was formally taken and published in the Official Journal of the European Union in April earlier this year… After its name was changed to European Institute of Innovation and Technology… After beautiful Budapest won the race and became the official location of the EIT in June… And after the EIT’s first Governing Board was officially appointed on 30th July 2008…

It is now time to get to work!

The only thing still missing is a real logo. As long as there is none, I’ll just keep on using the one I have been using for the last few years. Looks familiar, doesn’t it?

Academic Networking

Posted by Eric on July 12th, 2008

Social networking has gone academic. The Web 2.0 principles were already introduced in the field of science and innovation by the iBridge Network. Facebook brought social networking to the university, but its main goal was not exactly academic in nature. LinkedIn brought social networking to the professional sphere. Recently there have been some initiatives that bring social networking to academic life: Researchgate and Graduate Junction.

The Graduate Junction was established by Daniel Colegate and Esther Dingley, graduate students in Chemistry and Education respectively at the University of Durham in the United Kingdom. They set up The Graduate Junction because they were – in their own words – frustrated by a feeling of isolation in their own research projects and wanted to know who, if anyone, was doing similar research. I have had a quick look at it and it looks good and has the potential to be a valuable tool for graduate students. Much of its success obviously depends on the number of participants it will attract. If I were still a student I would definitely sign up and become a member of groups like this.

Researchgate targets a larger community. It is meant as a networking tool for all academics and researchers. It was set up by three students from Germany (one of them now at Harvard), two of them in Medicine and one in Computer Science. The concept is backed by a worldwide network of experts and advisers. Researchgate has big aspirations. Besides being a networking tool, it sees itself as the start of a more profound change in which researchers take more and more control over their publications and research findings.

So where will all this lead? Well… my experiences with these new tools for – often conservative – academics have not always been positive. Nevertheless, I’m positive about these new tools. Graduate Junction has the advantage that it targets a younger group of people, one that is probably more open to this kind of innovation. In addition, I think that the need for these tools might be more substantial among graduate students than among researchers in general. This is simply because the ‘normal’ channels such as journals and conferences are not so readily available to them and don’t provide that many opportunities for direct interaction.

Researchgate, on the other hand, has a more professional look and is already backed by a large network of academics. It also seems to provide more advanced technological features, like importing EndNote libraries and linking with databases such as PubMed. I would love to see a further expansion to enable more interaction and maybe new opportunities for open peer review.

I hope both initiatives will succeed. It’s about time for the academic community to start using the technological opportunities available. Both might turn out to be great new opportunities for inter-organisational, interdisciplinary and international cooperation.

Intellectual Property Infringement?

Posted by Eric on February 9th, 2008

Here’s a case to watch. The University of Wisconsin-Madison is accusing processor giant Intel of stealing its intellectual property. A lawsuit has been filed by UW’s technology transfer office, the Wisconsin Alumni Research Foundation (WARF), charging Intel with infringement of one of its patents. The patented invention improves the efficiency and speed of computer processing, and this technology is used by Intel in its Intel Core 2 Duo processor.

WARF filed this complaint to ensure that the interests of the UW-Madison and its inventors are protected and that WARF receives the compensation to which it is entitled for Intel’s unlicensed use of the invention. This compensation will be used to advance continued research at the university. The foundation’s complaint identifies the Intel Core™ 2 Duo microarchitecture as infringing WARF’s United States Patent No. 5,781,752, entitled “Table Based Data Speculation Circuit for Parallel Processing Computer.”

The technology, patented in 1998, was developed by four researchers at the UW-Madison, including Professor Gurindar Sohi, currently the chair of the university’s Computer Science Department. Intel has aggressively marketed the benefits of this invention as a feature of its Core 2 technology. “The technology significantly enhances opportunities for instruction level parallelism in modern processors, thereby increasing their execution speed,” states Michael Falk, WARF general counsel.

The researchers had several discussions with Intel representatives on the possibility of licensing the technology. Intel repeatedly refused but nevertheless incorporated it into its products. Intel never informed the researchers that it was using the patented technology. WARF is now asking the court to declare that Intel is infringing its patent and to stop Intel from selling the product. WARF has also asked Intel to cover its legal fees and pay damages. Considering Intel’s dominant position in this market and the huge success of the Core 2 Duo, the latter could prove very lucrative for the University of Wisconsin.

If it can be conclusively proven that Intel is using this specific technology, I guess that Intel will soon get together with WARF to come to a settlement…

Machines I want

Posted by Eric on January 31st, 2008

Now, isn’t this frustrating. After a hard day’s work, putting all my effort into converting my thoughts into text, I read this: Philip M Parker is the world’s fastest book author, and given that he has been at it for only about five years and already has more than 85,000 books to his name, he is also probably the most prolific. Parker himself says the total is well over 200,000.


So how does Philip M Parker (professor of innovation, business and society at Insead in France) do all that? When he turns to a new subject, he seizes and shakes it till several books, or several hundred, emerge. Parker invented a machine that writes books. He says it takes about 20 minutes to write one. I don’t know what kind of device this is, but I am sure I want one! Beats an iPod, a Kindle or a MacBook Air any time. Next week, the Education Guardian Weekly will have a closer look at the machine…

Update: here is how it works and here’s a video

EIT and Policy Research

Posted by Eric on May 6th, 2007

A few weeks ago, I discussed a study by Luc Soete and Peter Tindemans on the feasibility of the European Institute of Technology. On the basis of a comprehensive analysis, they concluded that the decentralised EIT proposed by the Commission was not feasible. It is too dispersed; it would not significantly increase the research output in any field; it cannot match a top-tier university in providing an environment for training graduates; and a dispersed institute cannot adequately organise technology transfer. As an alternative, they suggested a clustered model for an EIT. Food for thought, you would think…

In the last weekend of April, EU competitiveness ministers backed a German EU presidency initiative on gradual progress towards a European Institute of Technology. In a public hearing, Commissioner Figel said that it was time for the initial EIT plans to reach a conclusion. He claimed that there is positive momentum now: “either we get it now or it’s lost”.

Obviously I was surprised to read nothing about the Soete/Tindemans study in the report of the hearing. As far as I could see, the design and organisation of the EIT presented in the hearing was exactly the same as the one suggested by the Commission before the study was published. This is all the more surprising considering that the research was conducted for a committee of the European Parliament. Of course government bodies are not obliged to follow the recommendations of reports that they have commissioned. But you would expect that it would at least be taken into consideration, especially since the authors are well known and respected researchers in this field.

This seems to be a typical example of the political (ab)use of policy research and policy analysis. If the results and recommendations are politically opportune and correspond with politicians’ objectives, they are praised and heralded as groundbreaking landmark studies. If not, let’s just ignore them and get on with what we planned.

You would at least hope that decision makers on research policies in Europe would take research seriously…

Science 2.0

Posted by Eric on April 17th, 2007

One of my first posts in this blog was on the iBridge Network, a platform for searching and sharing innovations in universities. Universities can use the platform to license and distribute a variety of items, including software, research tools, databases, teaching materials, surveys, and reference materials.

Obviously I was surprised to read on the URENIO website that the iBridge Network was launched at DEMO 07 in January of this year. Well, it appears that the event I posted about 18 months ago was the announcement of the network, while this was the launch of the actual website and platform.

Laura Dorival Paglione, Director of the Kauffman Innovation Network, which manages the iBridge Network, explained in her presentation (by the way, this sounds a lot like what the CEO was saying 18 months ago, doesn’t it? ;)

“Universities are tremendous wellsprings of knowledge. By encouraging widespread access to information and linking researchers with interested parties, we are hoping to more fully realize the innovation potential that research offers.”

The platform started as a pilot for five universities: Washington University in St. Louis, University of North Carolina at Chapel Hill, Wisconsin Alumni Research Foundation, Cornell University and the University of Kansas. The University of Chicago and the University of Arizona joined a few months after the announcement.

I was a bit skeptical in my first post on this service. Looking at the website now, I think that it might eventually work. A video presentation is available at the DEMO 07 website. With all the share and collaborate features, tag clouds, categories and of course the ubiquitous ‘beta’ indication it looks a lot like Science 2.0. But like any Web 2.0 application, it will be very much dependent on the ‘user generated content’. Let’s see in another 18 months whether scientists are ready for science 2.0…

Yet Another EIT (or EITs)?

Posted by Eric on April 16th, 2007

A study team led by Peter Tindemans (former Chair of the OECD Megascience Forum) and Luc Soete (Director of UNU-MERIT, a joint research and training centre of United Nations University and Maastricht University in the Netherlands) has proposed yet another structure for the European Institute of Technology.

Originally proposed by Commission President José Manuel Barroso as part of the relaunched Lisbon Agenda, the aim of the EIT is to strengthen the European ‘knowledge triangle’ of research, education and technology. The European Commission first expressed a preference for the EIT as a single institution. After a consultation with a wide range of stakeholders, it proposed (pdf) a decentralised network structure in October 2006.

This EIT is organised around six Knowledge and Innovation Communities (KICs). These KICs should be seen as joint ventures of partner organisations – representing universities, research organisations and businesses – which are intended to form integrated partnerships in response to calls for proposals from the EIT.

Tindemans and Soete find the decentralised EIT proposed by the Commission not feasible. It is too dispersed; it would not significantly increase the research output in any field; it cannot match a top-tier university in providing an environment for training graduates; and a dispersed institute cannot adequately organise technology transfer. Instead of the decentralised model, they propose a clustered model. One of the major implications seems to be that there would be multiple EITs and that they would be more geared towards the regional context.

While they acknowledge that the underlying rationale for setting up the EIT is critical, they caution against making blanket assumptions about Europe’s inability to convert knowledge into commerce, to organise critical mass, or to reward entrepreneurship and excellence in research and education. The study team cites evidence from the latest European Commission Innovation Scoreboard, which found that several of the smaller European countries, as well as Germany, perform significantly better than, or as well as, the US and Japan. Not all EU countries, regions and institutions have problems with converting knowledge into commerce, organising critical mass, or rewarding entrepreneurship and excellence in research and education. The authors warn that ignoring this fact might lead to assuming too easily that a European-level institutional solution is necessary in cases where national or regional approaches might be more appropriate.


The report proposes an alternative that does support existing local strongholds in research, education and innovation. This so-called Cluster EIT would see ambitious and successful regions and universities compete to create strong institutes of several hundred staff, located at or linked to a strong university and working closely with industry on problems that determine long-term industrial development. In the US, such institutes are similarly concentrated around elite institutions, in places such as Massachusetts, Stanford, Austin and San Diego.

Another interesting point made by Soete:

“Nobody in the US would think of establishing an AIT (American Institute of Technology) so if we think of creating a European Institute of Technology it should recognize the present strongholds in research, in graduate training and in innovation. Otherwise, it will represent little more than what the French call ‘un saupoudrage’ of undoubtedly substantial additional research monies but which spread over such a wide number of research centres will barely make an impact.”

In their report (pdf) they further explain their recommendation for a ‘cluster EIT’ and also cover the financial aspects of this organisational form (see also the news item from EurActiv). I have only had a quick look at the report, but at first glance I think they make some good points. It seems that the role of the Commission would become more distant in this proposal, while the regions would become more involved in the development of the EITs. I wonder how the Commission will react to these suggestions. A public hearing on the EIT takes place in the European Parliament on 8 May this year.

Universities and Regional Development

Posted by Eric on August 22nd, 2006
Austan Goolsbee (a professor of economics at the University of Chicago) advises regions in the US to think twice about jumping on the ‘Silicon Valley bandwagon’. In an article in the New York Times he claims that funding local universities as a strategy for regional economic development is not likely to work. The need for caution is based mainly on the mobility of graduates and researchers.

Students from local colleges frequently move out of state when they graduate:

If Stanford can hatch world-famous companies around Palo Alto, politicians assume, their colleges can, too. But with so many trying to spin universities away from their traditional academic focus into engines of economic development, it is worth considering whether investing in local universities can achieve that goal. This strategy is based on the view that research done by professors can form the basis for local start-up companies and that the graduates of the university can supply the entrepreneurs and employees.

But advocates should remember an old maxim of economic development: Beware of investing in things that can move. As it turns out, graduates and research ideas both tend to move around a lot. Subsidizing teaching is problematic as a development strategy because graduates frequently move out of state.

And ideas and inventions – even in the form of patents – are of little use when the scientists who invented them leave. Or in the words of Lynne Zucker and Michael Darby, when they become ‘disembodied discoveries’:

They looked at such factors as having successful patents at universities or where highly influential science articles had originated. They found little evidence that the ideas helped local businesses any more than businesses in other areas. The one thing the study does find to be consistently associated with high-tech start-ups is the presence of star scientists – not the ideas, which can be copied, but the scientists themselves. This seems to be the one way in which a university can be used as an engine of business growth.

The importance of star scientists brings Goolsbee to the comparison with American Baseball:

Trying to make some town into the next Silicon Valley by attracting the best scientists is rather like trying to start a new baseball team and turn it into the New York Yankees. If dozens of sports-mad billionaire team owners can’t do that, how easy would it be for the economic development office at the University of Texas, Arlington?

What is worse, it is a safe bet that as these development incentives become a primary motivation for financing higher education, the competition among universities for stars will start looking much more like today’s baseball scene. Ambitious state university systems will find it easier to steal the stars of another team than to develop their own prospects. As a result, salaries will go through the roof – just as in baseball. And while everyone pays more, only a tiny number of cities will ever win the World Series. One will increasingly hear about how the costs of college are rising everywhere and that local economies have little to show for it.

The university’s role in regional development is a popular issue in higher education and innovation policies around the world, especially in Europe. So will these arguments be valid for other countries as well? I think it depends a bit on how you define the region. Under a narrow definition of regions, this can be the case. For instance, supporting a university in northern Finland might benefit the Helsinki region in the south more than the investing region itself. So yes, local and regional governments should consider these arguments when planning for their own Silicon Valley. However, because the funding of universities in many countries comes to a large extent from national sources (not local or regional), the creation of these high-tech areas is usually an element of a larger national innovation policy (especially in smaller countries).
If we compare the US states with countries, the mobility of graduates and star scientists might present a serious problem. If star scientists and graduates move to other countries, the national investments in these graduates and in the research of the scientists will benefit not the investing country but the host country. On the other hand, I think the mobility of graduates and scientists between US states and universities is significantly higher than between other countries and their universities.
Maybe the concept of the ‘star scientist’ is itself very American… One thing is for sure: luring top scientists with the salaries of baseball players won’t help much outside the US. The salaries of football players and a comparison with the Champions League might do a better job there.

Outsourcing Homework

Posted by Eric on May 16th, 2006

The Washington Post reports on another industry that is feeling the effects of outsourcing: education, and tutoring in particular. In the US, there are millions of dollars available under the No Child Left Behind Act to firms that provide remedial tutoring. And where there’s money, there are people who want to make more money. And where people want to make more money, they need to lower their costs:

When a Chicago-based tutoring company with more than 6,000 clients advertised in Bangalore for tutors with master’s degrees, more than 500 people applied for 38 spots, according to Bikram Roy, the firm’s founder and chief executive. “There is just a huge hotbed of talent there in math and science,” he said. “India has the best tutors — the best teachers — in the world.”

Amita (15), for instance, is being tutored by Lekha,

a $20-an-hour tutor who helps Amita with her geometry homework during twice-a-week, one-hour sessions. Using an electronic white board and a copy of Amita’s textbook, Kamalasan guides her through the nuances of cross-multiplication, triangle similarity and assorted geometry proofs. Amita is one of 400 students enrolled with Growing Stars, a California-based company whose 50 tutors, most of them with master’s degrees, work in an office in Cochin, India.

The demand for overseas tutors in the United States is creating a thriving industry in India. According to Educomp Solutions, a tutoring company in New Delhi, 80 percent of India’s $5 million online tutoring industry is focused on students in the United States. But it doesn’t stop with tutoring:

Some companies are thinking of educational outsourcing on a much broader scale than just tutoring. The Kentucky Community and Technical College System is outsourcing the grading of some papers to Smarthinking, a District-based online tutoring company that works with 70,000 students at 300 schools across the country and has both tutors in the United States and abroad. “Essentially we are acting as the teaching assistant,” said Burck Smith, the firm’s chief executive and co-founder. Right now, about 20 percent of Smarthinking’s 500 tutors are in countries such as India, the Philippines, Chile, South Africa and Israel.

As is the case with the outsourcing of the automobile industry, of tax returns and of drug trials, this form of outsourcing also has its critics. Rob Weil of the American Federation of Teachers, for instance:

“We don’t believe that education should become a business of outsourcing. When you start talking about overseas people teaching children, it just doesn’t seem right to me.”

A rather surprising statement for someone from the world’s largest education-exporting nation…

Publishing & Open Access

Posted by Eric on May 15th, 2006

Two related issues concerning the US academic publishing business were widely reported in the media over the last two weeks. The first was the National Institutes of Health policy on public access to research findings. The second was a bill proposed by Republican Senator Cornyn (Texas) and Democratic Senator Lieberman (Connecticut) requiring public access to federally funded research.

On February 3, 2005, the National Institutes of Health (NIH) announced a Policy on Enhancing Public Access to Archived Publications Resulting from NIH-Funded Research. Although the NIH strongly encourages that a manuscript be made available to other researchers and the general public immediately after it has been published in a journal, the Policy allows an author to delay the manuscript’s release for up to 12 months. Participation in the Public Access Policy is voluntary. The rate of submission to the system in the first 8 months has been less than 4 percent of the total number of articles estimated to be eligible.

The Chronicle, however, reports that momentum continues to build outside the NIH, and outside the United States, for mandatory posting of manuscripts in centralized free online repositories. In April, the European Commission released a report (pdf) calling for guaranteed free access to all publicly sponsored research.

But in May, the two senators from Connecticut and Texas introduced a bill that would require every federal agency that sponsors more than $100-million annually in research to establish an online repository and make its grantees deposit their articles within six months of publication. The bill would apply to 11 agencies, including the NIH, the National Science Foundation, and NASA.

“It will ensure that US taxpayers do not have to pay twice for the same research – once to conduct it and a second time to read it,” Senator Cornyn told Congress.

Obviously, this proposal ignited fierce reactions from the scientific publishing industry. Publishers’ representatives have come up with all kinds of responses:

Science addresses this issue:

Some publishers argue that there’s no evidence the public is as interested in, say, high energy physics papers as in health research. “You’re just expanding this willy-nilly on the assumption that there’s the same clamor,” says Allan Adler, vice president for legal and governmental affairs for the Association of American Publishers. Martin Frank, executive director of the American Physiological Society, argues that if the bill became law, it could be especially damaging to “small niche area” journals in disciplines such as ecology that have not yet experimented much with open-access journals that recoup publication costs from authors rather than subscribers.

And so does the New York Times:

Scientific data is easily misinterpreted, said Joann Boughman, executive vice president of the American Society of Human Genetics, publisher of The American Journal of Human Genetics. “Consumers themselves are saying, ‘We have the right to know these things as quickly as we can.’ That is not incorrect. However, wherever there is a benefit, there is a risk associated with it.”

And the Washington Post:

Patricia S. Schroeder, president and chief executive of the Association of American Publishers, promised a fight. “It is frustrating that we can’t seem to get across to people how expensive it is to do the peer review, edit these articles and put them into a form everyone can understand,” Schroeder said. [Isn’t the peer review something that academics do…for free…? Ed.]

And the Guardian:

But the Association of American Publishers warned that the law would jeopardise the integrity of the scientific publishing process. Association member Brian Crawford warned it “would create unnecessary costs for taxpayers, place an unwarranted burden on research investigators, and expropriate the value-added investments made by scientific publishers, many of them not-for-profit associations who depend on publishing income to support pursuit of their scholarly missions”.

I guess there are a lot of vested interests here… The bill will probably be discussed later this year. It would be about time for some fundamental changes in the publishing industry. To me it remains a strange phenomenon that an academic writes an article or book for free, his or her colleagues then do the peer review for free, and then (often after two years or so) they have to pay to get (online) access to those very articles or books. Or am I failing to see something here?

So that’s Nanotechnology

Posted by Eric on March 14th, 2006
Nanotechnology is right up there among the technology hypes, next to biotechnology and information technology (or are we past that?). If you are reading this post on your laptop or PC, the results of information technology are hard to ignore. The results of research in biotechnology are perhaps harder to grasp, but still pretty obvious (medical applications, food, agriculture). But nanotechnology?

Time for some nanotechnology 101:

Nanotechnology is the art and science of manipulating matter at the nanoscale (down to 1/100,000 the width of a human hair) to create new and unique materials and products. An estimated global research and development investment of nearly $9 billion per year is anticipated to lead to new medical treatments and tools; more efficient energy production, storage and transmission; better access to clean water; more effective pollution reduction and prevention; and stronger, lighter materials. And these are just a few of the more significant ways in which people are discussing using the technology.

If that didn’t help: the Prometheus weblog had a post on the release by the Woodrow Wilson Center’s Project on Emerging Nanotechnologies of an inventory of 200 existing consumer products that claim to incorporate nanotechnology. From chocolate to processors, from sunscreen to washing machines. Go to this BBC site for the nanotech future. Ever wondered why the iPod Nano is called the iPod Nano?

Technonationalism and Economic Globalism

Posted by Eric on March 9th, 2006
This month’s Far Eastern Economic Review featured an interesting article about Asia’s nationalist policies in the globalised field of science and innovation. Here are a few sections, but read the full story here (free access).

P.V. Indiresan, the former director of the Indian Institute of Technology Madras: “The future of both China and India is at risk, because neither owns the technology it operates; the intellectual property continues to remain in the West. The short answer to this problem is that we should develop our own technology; we should acquire so much intellectual property that the West will be as much dependent on us as we are on them.”


There has been a real effort to reach out to Asian diasporas in places such as Silicon Valley and Cambridge University. Successful Chinese, Korean, and Indian scientists are being successfully lured back to their home countries to new labs in new research centers stocked with the most advanced equipment. The Shanghai and Beijing municipal governments offer returning technology entrepreneurs tax breaks, subsidized office space and access to government-investment funds.


Mr. Wen’s (Premier Wen Jiabao of China, Ed) January speech about ‘independent innovation’ was accompanied by commentaries in Science and Technology Daily that quickly pointed out that self-reliance did not signal the abandonment of the ‘open door’ policy and that ‘independent’ did not equate to ‘insular’ or ‘closed’. Domestic firms themselves, moreover, have business strategies that may conflict with nationalist goals.

The very forces of globalization that are encouraging such knowledge transfers, however, are also undermining the abilities of Asian nations to effectively implement technonationalist policies or any top-down development strategy, for that matter. WTO restrictions on import quotas, tariff barriers, and export subsidies have gradually created more open and market-oriented economies. As a result, policy makers have gradually replaced state-led, highly centralized models of technological innovation with a more flexible and open system, increasingly dependent on foreign enterprises. As they have globalized, Asian societies have become less susceptible to top-down direction.


The twin forces of nationalism and globalization could, however, push in opposite directions. Changes in the security environment are the most likely scenario that would lead policy makers to more forcefully control the free flow of ideas or talent. Already worried about the rise of China’s military power, the U.S. defense and commerce departments are currently considering new regulations limiting the ability of foreign students and researchers to work with information and technology that is export-controlled. Job loss in developed countries, especially among knowledge workers believed to be immune from the vagaries of international competition, could generate a backlash against globalization. A failure of Asian firms to actually work their way up the value chain and begin to control proprietary technology may also cause decision-makers to question whether they can truly break free of dependence on Western technology through integration with the global economy.

It will not be surprising to see innovation and technological challenges arising from countries not historically known for their scientific prowess. While globalization is a part of this story, an important and often overlooked element of this story is the nationalist agenda promoted by Asian states. The world may be flatter, but it is still populated by nation-states seeking to increase their wealth, power, and status.

Technology Transfer and the Ownership of Science

Posted by Eric on March 6th, 2006
The Association of University Technology Managers represents professionals in the field of technology transfer and tries to develop and promote best practices in the profession. Universities have seen a significant increase in technology transfer activity. Before 1980, fewer than 250 patents were issued to US universities each year, and discoveries were seldom commercialized for the public’s benefit. In 2002, by contrast, AUTM members reported that 4,673 new license agreements were signed. Between 1991 and 2002, new patent filings increased more than 310 percent to 7,741, and new licenses and options executed increased more than 365 percent to 4,673.

The AUTM attributes much of the success in university technology transfer, and the resulting economic and health benefits, to the Bayh-Dole Act of 1980:

Co-sponsored by Senators Birch Bayh and Robert Dole, the Bayh-Dole Act enabled universities, nonprofit research institutions and small businesses to own and patent inventions developed under federally funded research programs. Before the passage of this legislation, new discoveries resulting from federally sponsored research passed immediately into the public domain. The provisions of the act, however, provided an incentive for universities to protect their innovations and, therefore, for industry to make high-risk investments resulting in products made from those innovations.

In 2005, the AUTM launched The Better World Project to explain in everyday terms how academic research and technology transfer have changed our way of life and made the world a better place (their words, not mine, ed). Recently they issued two reports that provide information on technology transfer projects, ranging from Honeycrisp apples and Google to the V-chip, nicotine patches and Taxol. The reports are available online:

Technology Transfer Stories: 25 Innovations That Changed the World (1 MB)
Technology Transfer Works: 100 Cases From Research to Realization (1.2 MB)

In 2004, two institutions in New York City accounted for about 20 percent of all revenues reported. Columbia University earned more than $116-million, and New York University reported earnings of more than $109-million. The concentration of licensing revenue among a small number of universities is typical. Eight institutions accounted for more than half of all revenues reported. At least 22 institutions besides Columbia reported earnings of $10-million or more.

Universities share proceeds from commercialization with inventors. Although formulas vary, inventors typically receive about one-third of the total. In many cases, additional allocations from the institution’s share go to their school, department, or laboratory.

Obviously, allowing universities to generate profits for themselves and the companies that license the inventions, while the research is funded by tax-payers, does raise questions and criticism. Who should own science? In the past years, several books have been published that critique the commercialization of research (and other academic capitalist activities in the knowledge factory / university in ruins) or at least point to the risk of the market or the paradox of the marketplace.

Despite all the criticism, the US approach to technology transfer still serves as the model for many non-US universities, and it is increasingly being copied in Europe, Asia and other parts of the world.
