Do you factcheck news stories that confirm your beliefs?

Environmental Graffiti posted an interesting story about a hoax written by Mark Twain in 1862 about the discovery of a petrified man. The story was widely copied and reprinted even though its basic facts were evidently wrong.

Why am I sharing this? Besides being amusing, I think it teaches a valuable lesson about our predisposition to accept stories and theories that confirm our own preconceived ideas and biases. An example in development policy is the tragedy of the commons, which is still used today to justify the dismantling of local (often collective) property rights systems in favor of individual, exclusive property rights.

Let’s question our assumptions before we make important decisions, especially when they have an impact on others.


Three lessons from a year of teaching 2.0 to researchers

The purpose of this post is to share with you three lessons we wish someone had told us a year ago. But then again, what’s the point of teaching if you don’t learn something for yourself?

Last summer, some colleagues at IFPRI and I decided to begin offering a series of weekly trainings aimed at teaching researchers about new web-based tools and services. During the first several months, that is exactly what we showed: tool, service, tool, and so on. Staff who participated in these early trainings would later report that they had hardly heard of, let alone used, many of the tools and services we were showcasing: wikis, del.icio.us, iGoogle, etc. They also revealed that few had continued using them in the months that followed their first taste of the new tools and technologies.

So we did what people normally do when they get really busy for a while: we continued teaching the same lessons in the same style until we had some time to calm down and reflect a little. Finally, we began to ask why more staff weren’t using these tools on a regular basis, and why we weren’t able to attract more research staff to the trainings. We knew these two questions were directly linked, and began to explore new approaches for reaching our target audience. Below is a summary of some of the more important lessons we’ve learned so far, along with the stories behind them.

  1. Focus on the job, not the tool. The first couple of times we talked about social bookmarking services with researchers, we showed them del.icio.us. In fact, we showed them how to create an account and how to import their browser’s bookmarked pages, and we briefly explained tagging and how to share resources with friends and colleagues. At the end of the session, we basically just told the researchers to go to it. A handful of researchers later asked us for help setting up their accounts. Few, however, reported that they were still using the service months after the training.
    What went wrong? Well, for starters, we were focusing on the tool rather than the application. It turns out researchers wanted to see how the tool could be applied in their daily lives; otherwise, their interest quickly passed. The lead-up to a major international conference on “Taking Action for the World’s Poor and Hungry People” turned out to offer a perfect opportunity to showcase one strength of social bookmarking services: the ability to create collaborative lists in real time. In years past, organizers of such events spent months contacting leading researchers to ask for lists of important works related to the conference, and contacting publishers to request permission to make those texts freely available to audiences in the developing world. Prior to this event, targeted individuals received an email invitation to submit their lists electronically and were given three options for doing so: emailing in their entries, filling out an online form on a website, or using their own del.icio.us account and tagging their recommended papers with a common keyword (food4all). Though the majority of submissions were collected via email or the online form, our del.icio.us page became the central repository for these resources and was used to publish the bibliographic list onto the conference website (a minimal sketch of this kind of tag-based aggregation appears after this list). When we showed this application to researchers in subsequent trainings, it provided a concrete example of what the service could be used for in supporting their own work. And, not surprisingly, more researchers got on board and have begun using social bookmarking in their daily lives since then.
  2. Researchers like hearing from other researchers, not us. Our first couple of sessions about blogging were well attended by research staff, but few expressed interest in setting up their own blogs. Once again, this had us scratching our heads as we tried to figure out why blogging wasn’t catching on among staff. Our approach had been to present blogs as a website-in-a-box that anyone could set up in a matter of minutes, and to show how many millions of blogs were being started by “regular people” every month. So it seemed to be another case of focusing too much on the tool rather than on how it could be used.
    Yet in subsequent presentations, we began showcasing organizational blogs from IFPRI and other research organizations, and still few seemed interested. Fortunately for us, though, we were able to capture the attention of a couple of younger researchers during these early trainings who would later take blogging at IFPRI to new heights. Eva Schiffer, a post-doc who developed a social network analysis tool, thought a blog would be ideal for sharing ideas and applications for her tool with the wider research community as well as with on-the-ground development workers. Soon, the number of entries and amount of traffic on Eva’s Net-Map Toolbox blog had surpassed that of IFPRI’s other blogs, and we invited Eva to present her experiences to her colleagues at IFPRI. During her presentation, Eva explained how the blog connected her to new audiences of readers and how her research actually benefited from the online exchanges with these readers, many of whom were other researchers and development workers engaged in similar issues. The truth is, Eva’s story wasn’t all that different from our own adapted sales pitch: that researchers were using blogs to reach new audiences who didn’t visit our organizational website, and that these new audiences often were looking to actively engage in creating knowledge rather than passively receiving information. But the fact that the message was passing from one researcher’s lips into the ears of her peers seemed to make the difference. Several staff approached us after the presentation requesting that we help them set up their own research blogs. Go figure.
  3. Don’t assume you know what researchers need: go out and ask them! I saved this one for last because, truth be told, we’re only now starting to move in this direction. Or rather, we’ve been asking them what they want to learn for some time now, and we typically hear them recite back to us the list of tools we’re already presenting. For a while, we took this as a sign that we were doing everything right, but then we started to wonder whether we were asking the right question. Put another way, were they saying they wanted to know more about blogs and wikis mainly because they knew that’s what we could teach them, or because they suspected that these tools would help them in their work? Based on how few were actually starting their own blogs and wikis, we had to assume the former was true.
    We began asking ourselves how we could find out what researchers needed in a different fashion. So we decided to rephrase the question: what are some common communication bottlenecks you face in your work? Many complained of email overload. Others expressed the need for collaborative work spaces for posting data, figures and working versions of research papers for sharing among colleagues and project teams. All this has led us to the point where we are now testing several content management systems that support the type of functionality researchers have requested. It seems unlikely that we would have arrived here so quickly had the researchers not told us what they needed.
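To make the tag-based aggregation from lesson 1 a bit more concrete, here is a minimal Python sketch of how a shared reading list could be pulled together from a social bookmarking feed. It assumes the service exposes an RSS feed per tag, as del.icio.us did; the feed URL pattern below is illustrative rather than authoritative, so adapt it to whatever service you use.

```python
# A minimal sketch of collecting a shared reading list from a social
# bookmarking service, assuming it exposes one RSS feed per tag (as
# del.icio.us did). The feed URL pattern is illustrative, not authoritative.
import feedparser  # pip install feedparser

TAG = "food4all"
FEED_URL = f"https://del.icio.us/rss/tag/{TAG}"  # assumed feed pattern


def fetch_reading_list(url: str) -> list[dict]:
    """Return {title, link, submitter} records, de-duplicated by link."""
    feed = feedparser.parse(url)
    seen, records = set(), []
    for entry in feed.entries:
        link = entry.get("link", "")
        if link and link not in seen:
            seen.add(link)
            records.append({
                "title": entry.get("title", "Untitled"),
                "link": link,
                "submitter": entry.get("author", "unknown"),
            })
    return records


if __name__ == "__main__":
    for rec in fetch_reading_list(FEED_URL):
        print(f'{rec["title"]} <{rec["link"]}> (submitted by {rec["submitter"]})')
```

A list like this can then be dropped into a conference web page, which is essentially what our del.icio.us page did for the food4all bibliography.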

All told, we’ve learned quite a bit from our experiences over the past year (and maybe even more than we’ve taught). I’d like to be able to tell those of you interested in implementing similar trainings to simply follow the tips I’ve shared above and your organization will be Web 2.0 savvy in no time. But as with all change, these things take time. Having another year under our belts of not just training but also implementing these tools and services in our daily jobs, as well as in our personal lives, has probably just barely laid the foundation for really getting our hands dirty and supporting researchers eager to soak up knowledge and information on working with new web tools and services (see Stephan’s last post on project management 2.0). In the meantime, keep us posted if you have any tips or “best practices” for teaching 2.0 in your organization, and we’ll do the same as new ones pop up.

Crafting an Intranet 2.0: If you build it, will they come?

First, a disclaimer: we haven’t built anything yet. Unlike in other posts, I can’t share any examples of what we’ve done so far toward building a collaborative intranet, since we are still very much in the planning phase. That being said, I think it’s still an opportune moment to reflect on some lessons learned and solicit advice on what others think about our proposed ideas. After all, if knowledge sharing has taught us anything thus far, it’s that we all have something to learn from one another…

Recently, I was checking out a Knowledge Sharing Wiki, which mentioned four different applications for institutional intranets. They included:

  1. Document sharing across an organization;
  2. Organizational staff directories;
  3. Online conversation space; and
  4. Centrally organized company policies, human resources information, etc.

Of the four, I would label 2 and 4 as the more conventional intranet functions, while an increasing number of organizations (IFPRI included) are now clamoring for 1 and 3, though not always through the intranet platform. For us, the question quickly became: why not combine all four to create an all-in-one intranet?

When we first brought up the idea of having features such as customizable staff bio pages and RSS feeds on the new intranet platform, it was met with much skepticism from our IT department. “No one will use it” was their short answer to our proposed ideas. After conceding that it would take time for most staff to become actively engaged, we pointed out that many staff already have such profiles on Facebook, LinkedIn and other social networking sites, and that some are using newsreader software such as Google Reader to stay up-to-date on current journal articles and websites of interest. Rather than building a new Facebook-like platform, we are proposing to let staff simply update their bio information on the fly (they currently do so via web-based forms in Access), parts of which would also be automatically published to their public profile on the web. This, we argued, would reduce the time and effort of updating these pages in multiple locations while also giving staff more ownership of, and greater incentive for, keeping the content current. Default content for these pages would simply be imported from the current staff directory, thereby avoiding duplicate data entry and leaving it up to individual staff members to decide when to adopt the new way of updating their bio pages. Moreover, the addition of RSS feeds and the ability to follow colleagues’ updated information would create a social-networking-type environment that would facilitate internal communication.
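As a rough illustration of the kind of glue we have in mind (and not an actual implementation), the Python sketch below turns a list of staff bio updates into an RSS 2.0 feed that colleagues could follow in their newsreaders. The update records and URLs are hypothetical placeholders; the point is only that once the bio data lives in one place, publishing profile changes as a feed is a small amount of code.

```python
# A rough sketch (not our actual implementation) of publishing staff bio
# updates as an RSS 2.0 feed so colleagues can follow changes in a newsreader.
# The update records and URLs below are hypothetical placeholders.
from xml.etree import ElementTree as ET
from email.utils import formatdate

# Hypothetical bio updates, e.g. pulled from the staff directory database.
updates = [
    {"name": "Jane Doe", "summary": "Added two new working papers",
     "url": "https://intranet.example.org/staff/jdoe", "timestamp": None},
]


def build_feed(updates: list[dict]) -> str:
    """Serialize the updates as a minimal RSS 2.0 document."""
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = "Staff profile updates"
    ET.SubElement(channel, "link").text = "https://intranet.example.org/staff"
    ET.SubElement(channel, "description").text = "Recent changes to staff bio pages"
    for u in updates:
        item = ET.SubElement(channel, "item")
        ET.SubElement(item, "title").text = f'{u["name"]}: {u["summary"]}'
        ET.SubElement(item, "link").text = u["url"]
        # formatdate(None) falls back to the current time in RFC 2822 format.
        ET.SubElement(item, "pubDate").text = formatdate(u["timestamp"])
    return ET.tostring(rss, encoding="unicode")


if __name__ == "__main__":
    print(build_feed(updates))
```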

For online communication purposes, an internal blog using WordPress was launched last year (along with several public blogs); it is now featured on the intranet home page and used by staff to stay up-to-date on both work and non-work related news. Our goal is to fully integrate the blog into whichever intranet platform is decided upon (with IT currently favoring SharePoint because it integrates with Active Directory) and to make posting announcements and events as easy as posting a blog entry. Moreover, this type of information would also be shareable via RSS, calendars, etc.

As for document sharing and the online storage of company policies, HR info, etc., some of this is already being uploaded into SharePoint, which seems to handle document and form libraries rather well, includes RSS feeds, and supports full-text searching. Our concern here (and it’s a big one) is that SharePoint does not perform well in low-bandwidth environments, such as those faced by most of our outposted staff (see the KM4dev online discussion of SharePoint). Other document sharing platforms currently used by IFPRI staff include Teamspace, wikis and Google Docs & Spreadsheets. Rather than limiting all staff to a single platform, our idea is to link to all of these services via the intranet portal. In the case of Teamspace, integration with SharePoint would be quite straightforward, while the different wiki platforms could either be integrated directly into whichever platform is used or simply have their content displayed on a given page via an iframe or an embedded RSS feed.
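For the embedding options just mentioned, a small script along the following lines could either wrap a wiki page in an iframe or pull the wiki’s recent-changes feed and render it as an HTML fragment for the portal page. This is a sketch under assumptions: the URLs are placeholders, and while most wiki platforms expose such a feed, the exact path differs between them.

```python
# A minimal sketch of the two embedding options mentioned above: wrapping a
# wiki page in an iframe, or pulling its recent-changes RSS feed and rendering
# it as an HTML fragment for the intranet page. The URLs are placeholders.
import feedparser  # pip install feedparser

WIKI_PAGE = "https://wiki.example.org/ProjectSpace"              # placeholder
WIKI_FEED = "https://wiki.example.org/recentchanges?format=rss"  # placeholder
MAX_ITEMS = 10


def embed_via_iframe(page_url: str, height: int = 400) -> str:
    """Return an iframe tag that drops the wiki page straight into the portal."""
    return f'<iframe src="{page_url}" width="100%" height="{height}"></iframe>'


def embed_via_feed(feed_url: str, limit: int = MAX_ITEMS) -> str:
    """Return a short HTML list of the wiki's most recent changes."""
    feed = feedparser.parse(feed_url)
    items = "\n".join(
        f'  <li><a href="{entry.link}">{entry.title}</a></li>'
        for entry in feed.entries[:limit]
    )
    return f"<ul>\n{items}\n</ul>"


if __name__ == "__main__":
    print(embed_via_iframe(WIKI_PAGE))
    print(embed_via_feed(WIKI_FEED))
```

Which option fits better depends on the page: the iframe is trivial but carries over the wiki’s own navigation and styling, while the rendered feed keeps the portal page lightweight, which matters for our low-bandwidth offices.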

In sum, although no single tool or platform fits all the needs expressed by staff and management, Web 2.0 applications allow outside services to be pulled in, remixed and displayed in various ways within a dynamic intranet platform. These new developments have led some to predict that the lines between intranet and internet will blur and that the “classical intranet” will become history in a few years. At IFPRI, we are banking on such predictions coming true, putting stock in the idea that if information is easier to find, update and share, user behavior will adapt accordingly.

Does the social web enable me to find more unique information or just more of the same?

The video Information R/Evolution and the post From The Information Age To The Connected Age describe two trends of the social web.

The social web can enable users to more easily find the exact information they are searching for. But if the scarce resource of the connected age is attention, then we are likely to see a surge of new marketing and PR methods and tools in the battle for audiences.

As producers of information and products learn how to use the social web to reach people, these same tools will be weakened in their function of facilitating access to the best information in favor of the loudest voice. How do we know that the loudest voice is the one we want to listen to? And if it tells us what we want to hear, why would we stop listening, let alone go looking for an alternative?

Research debates have very similar problems: big names are favored and have a lot of influence, while newcomers and dissidents have to accept the supremacy of the established voices. But do the big names always produce the best information? Or were they just lucky to have one great idea that made their reputation?

Are we already practicing web 3.0?

I just finished reading an article about web 3.0 on ReadWrite Web where web 3.0 is defined as being “about feeding you the information that you want, when you want it (in the proper context).”

Web 2.0 made it much easier to share and search for information, but it also led to information overload for many users. Following the above definition, web 3.0 will be about personalization and recommendation. While it would be great if this really does become technically possible soon, I wonder whether many of us are not already practicing web 3.0 as knowledge brokers in our respective niches.

I, for example, play the role of a filter and aggregator of information within CAPRi: after searching for, receiving and digesting all the information that might be relevant to our network (i.e., a lot of futzing), I filter it down to the bits and pieces I feel are important to share. People seem to appreciate this work, but for the system to work, I have to have credibility and people need to trust my judgment.

Will semantic tools and their algorithms ever be trusted in the same way we trust our friends, colleagues or a blog we love to read? Will there be a continuing need for a person’s involvement? What does this mean for research outreach and for others who create new information?

Measuring impact on the web

Like staff at many development research organizations, IFPRI staff often debate how best to measure the impact of our work. As opposed to some of our partner CG Centers, IFPRI’s work focuses on policy-level interventions, which are rather different from producing new crop varieties for the purpose of increasing agricultural yields, household incomes, etc. Thus, it could be claimed (and in fact has been claimed by some) that IFPRI primarily is in the business of producing “intellectual varieties.” Defining what we produce, however, is quite different from knowing how to measure impact, which underscores the importance of choosing which indicators are worth paying attention to. Below I list a few potential indicators for measuring impact on the web, along with my own impressions of their overall value and utility.

  • Page views/visitors: Many webmasters and other managers of web statistics reports have been questioning the value of these for years, but they continue to be reported year after year. Criticisms include the lack of proper contextualization (from what constitutes a ‘hit’ or ‘visitor session’ to how our statistics compare with others’) as well as the lack of any clear correlation between these easily measurable indicators and others designed to measure impact.
  • Downloads: It seems to follow that the more a particular publication is downloaded, the more people are reading it, thereby expanding its sphere of influence. In practice, however, the same person might download a particular publication multiple times, and even more important than the ‘how many’ question is the ‘by whom’ question. As with page views/visitor sessions, Google Analytics’ mapping features can be highly useful in obtaining this type of data and making sure that research outputs are reaching audiences in the countries they’re intended to benefit.
  • Citations: Once again, context is everything. Many research organizations only report on their peer-reviewed publications, and it follows that they are only interested in citations in peer-reviewed journals, books, etc. Several online databases offer citation tracking, including Google Scholar, ISI Web of Science, CrossRef’s Forward Linking Program, and CiteSeer, among others. One might be tempted to add these citation figures up to obtain a clearer picture of overall impact, but this would not yield such a picture, since several of these services count the same citations. Benchmarking your organization’s citations vis-a-vis your counterparts’ would probably prove more effective in monitoring the impact of your research publications.
  • Mentions in the media/blogs: Many research organizations have been monitoring this for years, but services like RSS feeds and automated alerts from Google News have made it much easier than in the past. The same applies to monitoring blogs via advanced search operators in Google and Google Blog Search, as well as via Technorati (a small monitoring sketch follows this list).
  • RSS feeds: Once again, many are questioning the value of measuring ‘hits’ as an indicator of impact. Instead, they argue, we’d be much better served by paying attention to the number of feed subscribers out there, though this number can also fluctuate heavily over time. For our purposes, Feedburner fits the bill rather nicely, even when sticking to their free, basic service.
  • Search engine rankings: Obviously, there’s one search engine that need not be named (at least, not for the umpteenth time in this post!) that most organizations pay attention to above all others, but many fail to take into account differences across its country-specific versions. My own advice is to pay attention to your ranking among search results for several keywords your organization may be targeting and to look into acquiring ad words for your institution via the Google Grants program (free for non-profits).
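To illustrate the lightweight monitoring mentioned in the media/blogs bullet above, here is a hedged Python sketch that polls a news search feed for new mentions of an organization’s name. The feed URL format is an assumption (news and blog search engines generally offer RSS output for a query, but the exact URL depends on the service), so treat it as a placeholder.

```python
# A lightweight sketch of monitoring media/blog mentions via an RSS search feed.
# The feed URL format is an assumption; swap in whichever search feed your
# news or blog search service of choice provides.
from urllib.parse import quote_plus

import feedparser  # pip install feedparser

QUERY = "IFPRI"
SEARCH_FEED = f"https://news.google.com/rss/search?q={quote_plus(QUERY)}"  # assumed


def new_mentions(feed_url: str, already_seen: set[str]) -> list[tuple[str, str]]:
    """Return (title, link) pairs not seen in earlier polls and remember them."""
    feed = feedparser.parse(feed_url)
    fresh = [(e.title, e.link) for e in feed.entries if e.link not in already_seen]
    already_seen.update(link for _, link in fresh)
    return fresh


if __name__ == "__main__":
    seen: set[str] = set()
    for title, link in new_mentions(SEARCH_FEED, seen):
        print(f"{title}\n  {link}")
```

Run on a schedule or wired to an alert email, a script like this gives roughly the same coverage as manual searches with far less effort.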

GTZ Bulletin on KM

The latest issue of the GTZ AGRISERVICE Bulletin looks at knowledge management. In particular:

1) linking knowledge management to the strategy of the institution (serving targets),
2) developing a culture of knowledge sharing (trust, reward, procedures),
3) involvement and participation of stakeholders (ownership, user logic),
4) capacity development (training, technology, organisational development),
5) contextualisation of information (content, quality, retrieval, communication),
6) monitoring and evaluation (use, impact).

You can download the bulletin here (pdf, 1.88 MB).