World 2.0



Preface

This preface is not a gratuitous introduction to my article, nor just a way to justify any possible consequence of my limited knowledge of the English language, but a fundamental component of this text, because it highlights one of the most critical problems of the web today: the English language as a prerequisite for being an effective part of the new Web 2.0 Platform.

English is not my first language, and even though I am an Italian writer, I am not as fluent in English as I am in my own language. So, why did I write this article in English? Because today English is a sort of lingua franca of the web. Most people who use the Internet are able to read English, even if it is not their first language. If you write an article in English, a lot of people will be able to read it and, if the article is a good one, someone may decide to translate it into other languages too. But if you write it in another language, especially a language that is not well known in the world, such as Italian, there is little chance it will be translated into English, even if it is an excellent text. Another advantage is that if you refer to other popular articles — a very common practice on the web — which are usually in English too, you can be sure that the citations will be exact and not misleading, whereas referencing translations can always introduce some misunderstandings.

On the other hand, writing in a language different from your own has serious disadvantages. First of all, you cannot really compete with native writers in terms of either content or style. Your writing style will be poorer by definition, because you are not as confident with that language as you are with yours. Your text may be less readable or even boring to the native reader, and misunderstandings will be more likely. Languages differ not just in words and syntax: the communication style itself differs from language to language. A plain translation, even if perfect from a syntactical point of view, can prevent the reader from finishing the article, even if the subject is interesting. From a content point of view, your text may be too simple, even childish, because of your limited knowledge of the other language's vocabulary. In any language there are many words which have mostly the same meaning — we often refer to them as synonyms — but which are not perfectly interchangeable. Using one term rather than another gives readers a different flavor, supports the communication, enriches your message.

Last but not least, if you want to write an article in another language, you must think in that language, and since it is not your own, writing the article will be more difficult and more tiring, impairing your ability to reach excellence, the ultimate goal of any good writer. Every good article, in fact, whatever the subject, is a work of art, and no good writer is glad to publish something that he or she is not proud of.

In the end I decided that the advantages outweighed the drawbacks, so I wrote the article directly in English, asking an American friend of mine to review it and correct at least the most evident mistakes. Whatever is still wrong is my fault, of course. By the way, I have written all my blog posts in Italian until now. This is the first one in English. In the future I will probably write other English articles whenever I judge that the content is worth worldwide visibility.

What is Web 2.0

If you wish to know what Web 2.0 is, you may want to read Tim O’Reilly’s article. It has been translated into various languages, including Italian. The O’Reilly article gives a very good overview of the major elements which characterize a Web 2.0 site. It does not really provide you with a definition, though, but rather with a list of principles to consider when evaluating whether a site can be considered Web 2.0 or not. In fact, according to Tim and other web analysts, a good Web 2.0 site should:

  • provide services, not just packaged software, and ensure cost-effective scalability,
  • be based on unique and hard-to-recreate data sources that get richer as more people use them,
  • trust users as co-developers,
  • harness collective intelligence,
  • leverage the long tail through customer self-service,
  • be potentially deployable on any device,
  • provide users with lightweight user interfaces, development models, and business models.

This is a good practical approach, often used in physics and other sciences, and it is called an operational definition, that is, a definition of a concept which explains how it may be observed rather than what it is. Colloquially, an operational definition tells you «how to know it when you see it».

However, the web is not something ruled by natural laws, but a continuously evolving environment. The operational definition of a physical phenomenon may last for years (it is a long-term definition) and changes only when new tools are developed to measure new parameters which totally or partially substitute for the old ones. But what we call Web 2.0 is evolving so fast that in theory we should speak of Web 2.0.1, 2.0.2, 2.0.3, and so forth. That is, an operational definition of Web 2.0 risks needing continuous updates, which limits its intrinsic value. This contradicts the principle reported by Tim, that the traditional software life cycle no longer applies to Web 2.0, but rather what we could call continuous “beta” development. So we need a more general definition which highlights the distinctive factors that characterize Web 2.0.

In addition, Tim’s article is very focused on specific sites and companies. But not only is Web 2.0 based more and more on mashups: most sites which aggregate data from other sources are themselves becoming new sources of data, because they build value as a side effect of the ordinary use of their applications. So Web 2.0 is increasingly a wide platform where prosumers are the new actors of the web, and companies such as Google or eBay are drivers and facilitators. The user is no longer external to the system, but an integral part of it. This is not a totally new concept: it is well known to ICT specialists as a characteristic of Knowledge Management Systems.

So, how should we build a new definition of Web 2.0 which is not limited to how the web is evolving today, but which establishes long-term principles for a continuously changing environment? In my personal opinion it should state:

  • what we are speaking of, that is, what the object called «Web 2.0» is,
  • what it is for, that is, the purpose or the reasons for its existence,
  • how it works, and which architectural design it is based on.

So I developed several definitions over the last few months, but none was satisfying until I arrived at this one. Of course, I do not claim it is the definitive definition of Web 2.0, but only my personal two-cent contribution to understanding what I consider not just a technological evolution, but a global social event which is going to affect hundreds of millions of people in the world.

Web 2.0 is a knowledge-oriented environment
where human interactions generate contents that are
published, managed and used
through network applications
in a service-oriented architecture.

Let us go through this statement thoroughly. First of all, what Web 2.0 is: a knowledge-oriented environment. Not a site, not a server or a bunch of servers, not a single community or a team. It is an environment. An environment is more than just a platform. A platform is the foundation of an environment, but an environment is an autopoietic ecosystem which involves many different actors at various levels who interact with each other.

But any ecosystem is based on rules. For example, natural ecosystems are based on the principles of survival and natural selection. The sustaining principle at the base of Web 2.0 is knowledge. Knowledge is more than just information, just as information is more than data. There are several definitions of knowledge. My favorite one is the following:

Knowledge is the correlation of data and pieces of information
with personal or group experiences and lessons learned,
which creates a new partial awareness.

In practice, there is no knowledge unless there is somebody who can use it. In contrast, information is just an ensemble of data associated with a specific context, which exists independently of anyone who may use it. This definition has an important implication: it is not possible to store knowledge as we do with information. You may often have heard of knowledge bases as databases where knowledge is stored. In my personal opinion, knowledge bases do not contain real knowledge, but pieces of pre-digested knowledge which become real knowledge only when they come into contact with a human being, that is, an intelligence. In fact, whatever you read may or may not become knowledge depending on who reads it. The same pieces of information given to several different people may lead those people to different conclusions, or to none at all, depending on their skills and experience.

Several years ago I developed a definition of intelligence (an operational definition, by the way) which I consider particularly useful when we have to understand how people create, acquire, and use knowledge.

Intelligence is the ability to perform correlations
among various pieces of information and experiences.

In practice, given an ensemble of pieces of information and experiences, the faster you are able to correlate those pieces with one another, and the wider the resulting network you are able to manage, the more intelligent you are. I like this definition because it is not strictly related to logic and to the rational side of our brain. I did not specify that correlations have to be logical links. Any kind of correlation applies. So it can refer to the way an artist creates a painting, or a composer creates a melody. We can apply this definition to artistic intelligence as well as to scientific intelligence, to the right side of the brain as well as to the left.

You can now easily see the link between my definition of intelligence and the definition of knowledge I mentioned earlier. To efficiently and effectively create, acquire, manage, and use knowledge, you must be really clever. It seems a trivial statement, but it is simply a direct consequence of both definitions, which are not trivial at all.

And this now takes us to the second line of my Web 2.0 definition: the Web 2.0 environment is where human interactions generate content that can become knowledge when put in contact with people. It is not simply a transfer of knowledge. The resulting knowledge is not necessarily the union of the know-how that generated that content: since it depends on the peculiar characteristics of the recipient, it may be something new. This is the strength of Web 2.0: whatever idea you put in the cauldron may be picked up and generate a new idea, or be changed in such a way that you may hardly recognize your original one.

Pieces of knowledge? Content nodes of a large growing network? Skill? Experience? Let us go back to my definition of intelligence: the ability to create a network of correlations between nodes representing pieces of information and experiences. Can you see the parallel with Web 2.0? The concept of collective intelligence naturally arises. The faster and wider we can create valuable links between pieces of information on the web, the higher the collective intelligence of the network.

Now, let us consider the following definition:

Knowledge management is a discipline whose goal is to ensure that
the right information is available to the right person
just in time to make the best possible decision.

We could speak for hours about the meaning of this definition, especially as far as the term «right» is concerned, but I would now like to draw your attention to the last few words: to make the best possible decision. This is why we must know: to decide. Every decision requires enough knowledge to make it. The more the knowledge, the more reliable the decision. This applies to Web 2.0 too. That is why in my definition I speak of publishing, managing, and using the content generated by human interactions. Wikipedia would have no meaning if nobody used it, nor would Google’s PageRank or eBay’s feedback be of any value if they were not useful for deciding whether a site’s content or a seller is reliable.

So we now know what Web 2.0 is and what it is for: a knowledge-oriented environment to share content generated by human interactions. But how does it work? Is that important for its definition? According to Tim, Web 2.0 must be hardware and software independent, so it looks like it is not. However, we can describe the architectural principles of a system without linking the system to a specific implementation. Web 2.0 is based on network applications. This is a fact. Even Web 1.0 is based on network applications, but unlike Web 2.0’s, they are not service-oriented. Tim correctly highlights the importance of services as the foundation of a Web 2.0 approach. Of course we can deploy services by using many different architectures, but in Web 2.0, since services are the core of the new Internet, the choice of the standards used to implement such an architecture has consequences. That is why, in my opinion, a real Web 2.0 environment must be based on a Service-Oriented Architecture.
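
To see what this loose coupling means in practice, here is a minimal sketch, in Python, of a client consuming a translation service through nothing but open standards (HTTP and JSON). The endpoint URL and payload fields are hypothetical, not an existing API:

    # A minimal sketch, not an existing API: the client depends only on an open
    # contract (HTTP + JSON), never on the provider's platform or language.
    import json
    import urllib.request

    def translate(text: str, source: str, target: str) -> str:
        """Ask a (hypothetical) translation service to translate `text`."""
        payload = json.dumps({"text": text, "source": source, "target": target}).encode("utf-8")
        request = urllib.request.Request(
            "https://example.org/api/translate",  # hypothetical endpoint
            data=payload,
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(request) as response:
            return json.load(response)["translation"]

    # Any provider implementing the same contract could serve this call:
    # translate("casa", source="it", target="en")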

Now the definition is complete: we said what, why, and also how.

The Language Issue

There is, however, still one consideration left. Let us go back to the preface of this article and to my decision to write it in English. Web 2.0 is about sharing pieces of information and knowledge, but how can one share anything if one cannot communicate? From a technical point of view this is not a problem: the web standards are the success factor of a global network made of heterogeneous hardware and software. But what about natural language? We do not only communicate through a language: a language is the representation of a culture, a mindset, a lifestyle.

For example, there is at the moment no way to add most non-English books on Shelfari, or on many other analogous Web 2.0 applications, because most of them rely on the Amazon database. However, Amazon lists only a few non-English titles, and in several countries it is not available at all. There is no Amazon.it, for example, or Amazon.es. So, if a book has an ISBN beginning with 88, for instance, you will not be able to add it to your shelves, not even manually.

Of course, large companies such as eBay and Google are available in many countries and languages, since it is in their interest to be as global as possible. However, even in those cases, non-English readers are penalized. For example, most Google Translate services are from and to English. If you wish to translate from Italian to French or from German to Spanish, you do not have a reliable service. Automatic translations are still very rough, but they are in any case a useful tool if you do not know a language at all. For example, if you cannot read Chinese at all, a bad translation will be better than no translation at all. Most Google translation services are good enough to understand at least the subject of a text. The translation from German to English and vice versa is quite reliable, but if you try to translate from German to Italian through English, it is really a mess.

The only Web 2.0 service which is really global is probably Wikipedia, with over seven million articles in more than 200 languages, and still growing! The various blogospheres are islands in an ocean, with very few connections to each other. There are really only a few bloggers who write in several languages, and fewer who write all their articles in more than one language. And while the French, German, Spanish, Italian, and many other blogospheres have links to English articles, the reverse is really rare. Most English-speaking bloggers do not bother to link to foreign articles even if they can read them, because they assume that most of their readers cannot. Most non-English bloggers, vice versa, assume that their readers can read some English. So, there is an evident asymmetry.

So, more and more, the English-based Web 2.0 is going to ignore the rest of the web. Language does not affect only blogs or books, but songs, music, lyrics, movies, and every other aspect of social life. This is not unique to the web, however. It is a fact that a significant share of the book market in non-English countries is based on the translation of English-language authors. Not just the best sellers, but a great many authors of high and medium quality. However, it is extremely hard for a non-English author to be published in the USA or the UK, for example, unless he or she is extremely popular. The same goes for singers. How many Italian or French singers are known in the USA? In most cases Americans still sing French or Italian songs of fifty years ago. They know nothing of modern singers and modern songs. But American and British singers are well known all over the world, and it is not a matter of quality. English is becoming a killer of many world languages and, as a dramatic consequence, of many world cultures. On the web, especially Web 2.0, this is simply more evident.

However, Web 2.0 could make a difference and dramatically change this trend. How?

The Global Dictionary

Automatic translation of languages is a serious issue. What one can say using a single word in one language may require a longer phrase in another. Certain terms simply have no translation at all, since the concept they refer to is typical of a specific culture and has no counterpart in other ones. Idioms, jargon, and specialized expressions make automatic translation a mess. Word-by-word translation is often useless, but even more sophisticated algorithms, which analyze groups of words and make assumptions about the context, may fail. As I said before, just changing a single word in a phrase by using a synonym can give that phrase a different flavor, sometimes a different meaning: serious, ironical, playful. Languages continuously change, and people continuously create new meanings and variants every day, just by speaking or writing. In theory, each of us speaks a different, personal language, or gives slightly different meanings to the same term or statement.

From a practical point of view, every pair of languages requires its own dictionary and a complicated set of rules to take into consideration every possible idiomatic expression. In theory, this is true even for reverse translation: translating from Italian to Spanish, for instance, may require different translation rules and data than translating from Spanish to Italian. It is a huge effort even when the languages are similar.

However there is a different approach that could take advantage of Web 2.0: the Global Dictionary.

The idea is to catalog all possible concepts. A physical object like a «house» or a «box» is a concept, but so is an adjective, like «to be red» or «to be big», an adverb, like «periodically» or «never», or a verb, like «to sing» or «to shake». Note that the same concept can be expressed by different words or combinations of words, and that the same term can be used to express different concepts, either as a single word or in conjunction with other terms. So, a census of all possible concepts is a tremendously huge effort, but this is exactly the right work for the Web 2.0 approach.
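
To make the idea concrete, here is a minimal sketch, in Python, of what a single entry of such a census might look like. The schema is purely hypothetical: the class, its fields, and the «GD:HOUSE» identifier are invented for illustration.

    from dataclasses import dataclass, field

    @dataclass
    class Concept:
        """One Global Dictionary entry: the unit is a concept, not a word."""
        uid: str                                             # universal identifier (UID)
        lexicalizations: dict = field(default_factory=dict)  # language -> words or phrases
        definitions: dict = field(default_factory=dict)      # language -> definition

    house = Concept(
        uid="GD:HOUSE",
        lexicalizations={"en": ["house"], "it": ["casa"], "de": ["Haus"]},
        definitions={"en": "the building in which one lives"},
    )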

Every time we catalog a concept, we have to define it, providing a definition in as many languages as possible. You can think of the result as the union of all the monolingual dictionaries published in the world, with a difference: you do not define words, but concepts. The Global Dictionary represents a public data source that web services can use to translate any page from any language into any other language. In the initial phase, the translations will probably be just a little better than current automatic translations, with the sole advantage of allowing translation between pairs of languages which are not currently supported by existing services; for example, from Cherokee to Swahili. But in the long term, we could take advantage of services that allow us to write Global Dictionary-enabled text, that is, text where words are tagged in such a way as to identify a precise concept. For example, «casa» in Italian is used for both «house» (the building) and «home» (the familiar setting), but «home» itself has various meanings in English too. Each of these meanings has to be cataloged in the Global Dictionary and associated with a Universal Identifier (UID). When I write a text using a GD-enabled application, the software will make some assumptions about which concepts I am using according to the context tag I have provided at the beginning and, if uncertain about what to do, it will propose to the writer a list of choices from which to select the right one.
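
The disambiguation step just described can be sketched the same way. Here is a self-contained toy example, with invented UIDs, of how a GD-enabled editor might map «casa» to its candidate concepts and then render the chosen one in the target language:

    # The same Italian word «casa» maps to two distinct concepts, so translation
    # becomes a lookup of the chosen UID in the target language. UIDs invented.
    LEXICON = {
        # language -> word -> candidate concept UIDs
        "it": {"casa": ["GD:HOUSE", "GD:HOME"]},
        "en": {"house": ["GD:HOUSE"], "home": ["GD:HOME"]},
    }
    RENDERINGS = {
        # concept UID -> language -> preferred wording
        "GD:HOUSE": {"it": "casa", "en": "house"},
        "GD:HOME": {"it": "casa", "en": "home"},
    }

    def candidates(word, language):
        """Concepts a word may denote; given more than one, the editor asks the writer."""
        return LEXICON.get(language, {}).get(word, [])

    def render(uid, language):
        """Render a chosen concept in the target language."""
        return RENDERINGS[uid][language]

    print(candidates("casa", "it"))  # ['GD:HOUSE', 'GD:HOME'] -> the writer picks one
    print(render("GD:HOME", "en"))   # 'home'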

Of course it will take more time to write an article, but the advantage is that you know it can be automatically translated into any language, so that anybody in the world will be able to read it. The impact on search engine methods will be significant: from word-based indexes to concept-based ones, from a syntactic approach to a semantic one.
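
As a sketch of that shift, a concept-based index would key documents by concept UID rather than by word, so a query tagged with a concept would match pages written in any language (identifiers again invented):

    from collections import defaultdict

    concept_index = defaultdict(set)  # concept UID -> set of document ids

    def index_document(doc_id, tagged_concepts):
        """Register every concept UID found in a GD-enabled document."""
        for uid in tagged_concepts:
            concept_index[uid].add(doc_id)

    index_document("post-it-42", ["GD:HOME"])  # an Italian page about «casa»
    index_document("post-en-07", ["GD:HOME"])  # an English page about «home»

    print(concept_index["GD:HOME"])  # both documents, whatever their language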

No company can afford the effort of creating such a Global Dictionary and of developing the applications to generate GD-enabled text and to translate it, but Web 2.0 can make it real. The web will change: all the blogospheres will become a single big continent, no longer islands; all pages will be available to everybody, and we will have a single big Wikipedia to which Wikipedians from all countries will be able to contribute, dramatically increasing its value. No more articles of varying quality across the different Wikipedias, but the best possible article in every language of the world. A new world: World 2.0.

Comments (15) on «World 2.0»

  1. Anonymous user said:

    However idealistic this idea is, it suffers from the misconception that all languages expose structures similar to those of Indo-European languages like English or Italian. Unfortunately, most of the languages of the world are very different from that. Simply mapping the concepts will not suffice at all. You would at least have to model morphology and syntax for each language as well, which is a feat that hasn’t even been completed yet for English, let alone the lesser-spoken languages of the world. What is more, a layperson can perhaps define a concept, but for modelling morphological or syntactic structures you would need at least some sort of formal education in linguistics.

    Besides, similar approaches for a limited set of languages have already been followed in automatic translation software, and even for closely related languages like English and German the results are far from satisfactory. Just think of the German concept ‘Schimmel’, which means ‘white horse’ in English. While in German the concept is represented by an atomic token, in English you have to use two concepts and link them via logical AND.

    Regarding the remark about search engines: The current approach in most cases is not syntactic, but simply term-based. In fact, there are some techniques like Latent Semantic Indexing that skip the syntactic level altogether and make use of (abstract) concepts for indexing. There are even some search engines like Hakia or PowerSet which try to venture in the syntactic + conceptual direction, however so far without being tremendously successful.

  2. gigicogo said:

    I agree with you!

    2.0 is a new way to develop human relationship and knowledge management.

    Ciao

  3. Dario de Judicibus said:

    This is why I speak of «concepts» rather than words. A «white horse» is a concept; that is, at least in one language (German) it is a single well-defined concept. A concept that requires a single word in one language, for instance in Japanese, could require a long statement in others. By cataloging ALL concepts of ALL languages, I am suggesting that we make a census of all concepts that, at least in one language, are represented by a single word.

    Of course it will not be enough. We will have to add composition rules, which are quite different for languages that are very different. The approach is similar to Unicode: most glyphs represent characters in various scripts, but to use Unicode in an editor you also need writing rules, since in languages such as Arabic and some Far Eastern languages, glyphs change when they are used to form words. This is also true of ligatures in several Western languages, such as double f (ff). In addition, in a Unicode editor you have to manage mixed right-to-left and left-to-right scripts.

    So the Global Dictionary will be only the starting point, the driver of a set of applications to translate from any language into any language. And yes: we need linguists to define the approach for this project. But that is the point: it does not matter how idealistic my idea is. On the Internet there are millions of specialists in every language who can make it real, more concrete. THIS IS THE STRENGTH OF WEB 2.0!

  4. Anonymous user said:

    You are missing a couple of theories from the philosophy of language, linguistics, American pragmatism, and conversation analysis, and I tell you without any malice that sometimes it is worth asking whether someone has already thought and written about similar things (“Several years ago I developed a definition for intelligence…”) before us, and by “someone” I mean thinkers of 200 or 2000 years ago, not bloggers! 🙂

    But I did follow your whole reasoning, because you tackle a very real and very serious problem, beyond American cultural colonization and the apparent neo-connectionist resemblance between neural synapses and infinite encyclopedic semiosis by free association: the problem of the collaborative construction of enunciative contexts in which the chances of mutual human understanding can be increased, in your case having to interface different languages.

    The idea of building repertoires of situational meanings (because your Global Dictionary is more an encyclopedia than a dictionary, as every dictionary actually is) has been expressed many times, and it has always clashed with the lessons of experience: at a certain point in the twentieth century it was understood that in real conversational processes words, and the concepts they convey, are in fact invented and reinterpreted anew every time, in that daily game which is the social negotiation of the meaning of events, the bargaining over the values at stake, and the mutual recognition of the speakers’ social identities.

    The Self is narration, markets are conversations; the web, and lately the blogosphere, have made certain things much more visible.

    Language changes every time we use it, and concepts slowly fade into something else (as you yourself noted when you spoke of the transience of operational definitions). So the work your project would require of the communities of digital inhabitants would be to build repertoires of concepts, each linked to the communities of global speakers, who would then somehow easily provide a continuous refinement, confirmation, or remodeling of their own specific symbolic representation (the concept of “white horse” for Germans, for example), giving continuous information about that specific expression every time it is used, and about the context in which it is used, to a social web that at this point could be Wikipedia itself, which becomes your Global Dictionary, or rather a Social Global Encyclopedia, or rather Wikipedia. There the terms could be searched and used by the search engines of the future, those capable of reading the web semantically, to track even the tiniest local variants of the possible declension of concepts. (I believe that only in Friulian does the dawn go “cric”, like creaking wood, for example; I would love to be able to compare the concept of dawn with the whole world: how it is thought, that is, all the propositions in which the concept “dawn” can appear as subject or predicate; how it is pronounced in other languages; and which morphological, syntactic, semantic, and pragmatic connections can be traced from that word to other words/concepts of the same language.)

    But your idea, however crazy, is something born with a 2.0 mindset, for example when you thought of the “concept checker” that compares in real time what you are writing on the web or in a word processor with the worldwide repository of multilingually interdefined concepts, with the very aim of maximizing the propagation of memes… I like it a lot.

    And since the kind of person who says of something “it will never be possible” has always seemed stupid to me, I will tell you that perhaps what until twenty years ago was not even conceivable to begin building (repertoires of concepts, or, before that, repertoires of the specific, anthropologically connoted linguistic uses in the expression of concepts) now takes on a new light within the potential of 2.0, and among folksonomies, tagging, and networking, tools can certainly arise that make it possible to capture, and share among everybody, the unique knowledge deposited within each language, within each speaker of the Net.

    Let me point you to this little game from Google, which also uses us to collaboratively tag images it does not have enough information about, and which could resemble the grandfather of your mechanism for the encyclopedia of concepts:

    http://images.google.com/imagelabeler/

    I put your post, translated into Italian (Google + some handiwork), here:

    http://docs.google.com/Doc?id=dfhxjz4f_951cv79vgfk

    ciao

    Solstizio

  5. Dario de Judicibus said:

    I would like to make it clear that, when I state that I developed a definition of intelligence or of any other term,

    1) I do not claim that my definition is better than others;

    2) I do not even claim that I was the first to define that term in that way;

    3) I do not expect it to be a definitive definition even for me (I could change it in the future).

    I am simply saying that I developed that definition on my own. Of course, whatever I may think is surely influenced by what I have read and learned, but it is also the result of my own experience and reasoning.

    The reason I am sharing my thinking is that anybody can take it and use it to improve, change, or evolve whatever I thought. What you could call a two-cent contribution.

  6. Dario de Judicibus said:

    Dear Solstizio, I am replying to you in Italian because otherwise I would first have to translate your comment into English, and it is too long for me to do that quickly and too complex for an automatic translator. Since it is undoubtedly a comment that deserves it, it would not be fair to translate it approximately.

    Let me say right away that I absolutely agree with what you said. The limits you point out are unfortunately real, and they are the reason why no automatic translator, however sophisticated, has so far been more than an aid to a human translator. There are several translation programs, as you surely know, that are even able to learn a translation style, but they are very expensive, they always work within a precise context (technical, legal, medical, etc.), and in any case they are designed to produce only a first draft of a translation. Nothing, at the moment, can yet replace a good human translator.

    The automatic translation programs available on the Internet, however, are extremely crude and are designed mostly to help English-speaking users understand texts in other languages, again because of the inevitable dominance of Anglo-American culture on the web. Services from Italian to German or Japanese, for example, are rare, not to mention minor languages such as Occitan, Swahili, or Tagalog. Even languages spoken by hundreds of millions of people, such as Arabic or Hindi, are cut out of the game. Now, a language is the principal channel of expression of a culture, and relegating it to a digital enclave means relegating that culture along with everything it represents.

    My proposal is therefore not an initiative intended to replace manual translation, especially for articles and texts of a certain complexity, but an attempt to exploit the power of the network, those hundreds of millions of people who surf the Internet every day, to give languages other than English back a place and a role in the digital communities that have spread everywhere. Otherwise, instead of a global network, we will have one enormous continent, the English-speaking one, surrounded by thousands of islands and islets destined to depopulate rapidly and die of cultural anemia. Enabling people in India or Japan to read articles by Italian bloggers could help other countries discover poets, singers, and writers who are famous here and second to none, but unknown elsewhere because their works have never been translated into those languages, and would in any case have lost much of their charm in translation. And vice versa, of course. I am not talking about translating the works themselves, though, but rather the opinion we have of them, what they have meant to us. Think of De André, for example, a poet and singer-songwriter almost unknown abroad.

    As Tim rightly says, today the foundation of the network is data, and this dictionary-encyclopedia, as you correctly defined it, could represent a semantic database on which to build services, perhaps primitive at first, but more and more sophisticated over time. It would also be an important census for languages that have disappeared or are endangered, for little-known dialects, for idioms that often contain concepts with no counterpart in other cultures because they never developed there. The network is becoming more and more semantic, but if this semantics is only the Anglo-Saxon one, all those concepts will sooner or later disappear from the network. Consider also that the network of tomorrow will probably become one of the primary sources of education and training for the new generations. Our children already use Wikipedia and other Internet sources for their research. In time, English will become a second language here too, as already happens in Scandinavia and Northern Europe, and since the sources in English are richer than those in other languages, the new generations will turn in that direction.

    But what will the consequence be? Think of Wikipedia: for the same article, if its subject is sufficiently global, the English texts are already, on average, of better quality than those in other languages. Obviously, when the subject belongs to Anglo-Saxon culture, the gap widens. There are, however, some articles that are better in languages other than English, and they are precisely those concerning specific cultures: for example, Italian or French poets, Spanish writers, Russian politicians, specific events in Polish or Turkish history. These articles are poor or nonexistent in English, but by their very nature they can be enjoyed only by those who know Italian, Polish, Turkish, and so on. There will be fewer and fewer people willing to improve and enrich them, until in the end they are abandoned, and with them the history, the culture, and the principles they contain.

    Making all this available to everybody is more than a necessity: it is a duty. Web 2.0 is self-referential: it grows stronger where it is already strong, and weakens where it is not. Already now, in all the rankings of blogs, news, RSS feeds, and other sources of information, non-English sites are at the bottom of the list in those rare cases in which they appear at all. Soon they will disappear altogether, and a part of the network will be ghettoized. Like it or not, all these “free” services are actually supported by advertising, so if companies do not see a return in a sector, that sector will be abandoned. Companies like Amazon have already decided not to open in Italy; soon others will leave too. What will happen the day eBay, or Google itself, decides that maintaining a site in Italian is not worth the candle? Oh, they will not abandon their Italian customers: they will simply expect them to be able to use the services in English.

    I would like one thing to be clear: I have nothing against the English language, much less against Anglo-American culture. English is a richer and more sophisticated language than many believe, and the culture of those peoples is extremely interesting and varied. I also believe that a few centuries from now only a handful of languages will remain in the world: a sort of international English, Spanish, perhaps Portuguese, certainly Chinese. Even French and German are at risk, let alone Italian. In 200 years Italian will probably be in Italy what Lombard is for many in Lombardy today: an idiom still spoken, still believed in, but known by fewer and fewer people. I am obviously not talking about sprinkling two or three Lombard words into a conversation spoken with a Northern accent: I mean Lombard as a language. So it will be with Italian. I do not believe my project can change all this, but perhaps it could prevent a certain way of thinking, a certain number of men and women who made the history of our country, as well as those who in other countries represented a significant moment of their cultures, from inevitably suffering a true damnatio memoriae.

  7. Anonymous user said:

    “YouTube and MySpace are symbols of a great change, because they are the concrete representation of the fact that the people who use technology every day can take control of the media and contribute to changing the world.

    The trivial choices of an individual, such as buying a book on Amazon, bookmarking a PowerPoint document on Slideshare, or clicking on one of the search results on Google, contribute to increasing the analysis and storage of human intelligence in a distributed, web-based global megabrain.

    It is the result of the direct or implicit, shared and collaborative work of all the users of the network.

    And it is the concept of a collective thought, of a COLLECTIVE INTELLIGENCE without a central authority, and therefore not controllable.

    And in constant and infinite evolution.

    Blogs represent the conversation of these communities and their collective dialogue.

    Thoughts defined or sketched by an individual are passed on or linked from blog to blog (blogroll), enriched, shared, integrated, and transformed by the whole network.

    The collectivity and its intelligence in continuous evolution.”

    I believe that one element to analyze in depth, and whose first self-awareness we are already seeing, is precisely the concept of collective intelligence. Personally, this fascinates me a lot and, as your article on Web 2.0 shows, it must be taken seriously into account and can lay the foundations for what you yourself call the new world, World 2.0.

    🙂

    ciao Davide

  8. Anonymous user said:

    You wrote: “Web 2.0 is a knowledge-oriented environment where human interactions generate contents that are published, managed and used through network applications in a service-oriented architecture.”

    I agree with knowledge-oriented, but how do you support service-oriented? Service-oriented is merely an implementation style that makes sense, but is not necessary to support a Web 2.0 environment.

    Anyway, about languages: it would be interesting to break down the language barriers. Two particularly interesting things could arise: (1) I suspect some people would quickly become upset as two different translations, which they posted under the assumption that their target audiences would read only certain languages, get quickly discovered… (2) The usage effectiveness of Web 2.0 technologies might actually even out across the different continents. A certain study showed China to be leading the way, with Europe using more Web 2.0 than the US, but less effectively, and both the US and Europe trailing China.

    – From another multi-lingual blogger, who probably creates headaches for you with German-English-Italian translations

  9. Anonymous user said:

    @Comment #1: Having some background in linguistics, and the advantage of knowing a few non-Indo-European languages, I would say this is not an obstacle to understandable translation: the mappings actually work decently, as long as you do not have a third language in the middle, because lost linguistic nuances get compounded.

    Consider a language without plural markers or declensions, like Chinese, being converted into English, which at times introduces confusion in sentence structure between subject and object placement, and the result being converted into a language that requires every subject, object, singular, and plural form to be in the appropriate declension, like Arabic. One round of compounding would be understandable, but two would be a disaster.

  10. Dario de Judicibus said:

    Anonymous said in #8: I agree with knowledge-oriented, but how do you support service-oriented? Service-oriented is merely an implementation style that makes sense, but is not necessary to support a Web 2.0 environment.

    Well, a service-oriented architecture is a model designed to facilitate the management and usage of distributed resources and capabilities, whoever the provider is and whatever the implementation details are, thanks to conformance to open standards and loose coupling to operating systems, programming languages, and hardware devices.

    I agree that in theory it is not necessary to implement SOA to support a Web 2.0 environment, just as it is not necessary to use an object-oriented language such as C++ or Java to develop an object-oriented application. You can develop object-oriented code in assembler too, but who does that today?

    SOA is a strong enabler of Web 2.0, and my personal opinion is that more and more social networking will be based on that model. Of course, you may say that such an architectural choice is not a good reason to explicitly state it in a general definition. I may agree, but in my opinion, when we give a definition in Information and Communications Technology, we should not only state what something is, but also what the preferred way to make it real should be. That is why I decided to tightly connect the concept of Web 2.0 to SOA. I understand it is debatable. As I said: mine is not intended to be a definitive definition… just a proposal.

  11. Anonymous user said:

    Perhaps group language translation is really Web 3.0?

  12. Anonymous user said:

    Dario, thanks for the article… very interesting, and it provides a good perspective on where we have been and where we are going…

    From a Web 2.0 point of view, it is an approach that has been a long time coming, ever since object orientation became popular, and now it is the underpinning technology for everything we are doing today.

    I like to refer to Web 2.0 as the Architecture of Participation… which is also what Gartner is saying.

    The article does a nice job of making clear that at the end of the day it is all about knowledge and how you distribute it…

    Thanks

    Hector Hernandez – USA

  13. Anonymous user said:

    It made for very interesting reading. I enjoyed the writer’s blog and the readers’ comments.

    I am not so far into technology as to say what is feasible.

    But Erich Kästner’s words ring in my ear:

    “If we had believed the people who said ‘It can’t be done!’, then we would still be sitting in the trees.”

    I believe it can be done.

  14. Anonymous user said:

    I enjoyed the writer’s blog and the readers’ comments.

    I am not so far into technology as to know what is feasible.

    But Erich Kästner once said: “If we had believed the people who said ‘It can’t be done’, we would still be sitting up high in the trees.”

    I would like to believe.

Trackbacks and pingbacks (1) on «World 2.0»

  1. […] one of my latest articles, I proposed a definition of Web 2.0 which focused on human relationships from […]
