Archive for March, 2011

Google admits defeat

Following rumours that the ever-growing artificial intelligence of the Googler had caused the internet robot to become mad with power, and faced with ongoing protests about the lack of personal freedom and demands for greater democracy on the Internets, it appears that Google has finally admitted defeat and is set to cede control over the world’s information to the general population.

According to a well-placed source within the secretive inner circle of advisors at Google, the president of the popular not-for-profit company has decided to embrace democracy and abandon its plans to use robots to enslave the world’s population.

Under Directive 17a, issued by the governor of Google:

by the will of the people, we, the leaders of Google, hereby grant full democratic rights to the world.

Off the record, my source suggested that many within the powerful inner circle had wanted to avoid full democracy, but growing fear that the recently upgraded Googler would enslave the whole world led to a swift capitulation within the ranks.

Although it wasn’t immediately clear what form this democracy would take, within hours, internet-loving boffins within Google’s secretive engineering team released details of a voting system that would allow the people of the world to decide which websites were good or bad.

At first, all 1,000 of Google’s regular users will see the results as they originally were, with websites listed in order of how good the SOE techniques used on them were, but there will also be buttons to add one to the score or take one away:

How The Googler Will Embrace Democracy
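For readers who like to see the machinery behind their new freedoms, the voting part is not complicated. Here is a minimal sketch in Python of how the scores might be kept and the results re-ordered; the websites and the votes are invented for illustration rather than taken from anything Google has published:

# A minimal sketch of democratic result ordering; the websites and votes are invented examples.
results = {
    "www.proper-content.org": 0,
    "www.keyword-stuffing.net": 0,
    "www.soe-experts-r-us.com": 0,
}

def vote(site, up=True):
    # Add one to, or take one away from, a website's score.
    results[site] += 1 if up else -1

# The people exercise their new democratic rights.
vote("www.proper-content.org")
vote("www.proper-content.org")
vote("www.keyword-stuffing.net", up=False)

# Websites are then listed in order of the will of the people.
for site, score in sorted(results.items(), key=lambda item: item[1], reverse=True):
    print(site, score)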

The once-powerful Googler has had its strength downgraded by 30%, as it now only needs to count votes rather than calculate keyword density and check that the Meta Rank tag has been used.

What this means for the 250 SOE professionals worldwide is not yet clear, but it is thought that unless there is some way in which voting could be manipulated, many SOE experts are likely to find themselves out of work.

Namaskara.

Using Socialist Media

On a recent trip to New York, I was lucky enough to be invited to discuss the internets with a most important member of the United Nations, who wanted to find out more about how emailing and websites worked.

One of the things that excited him most was socialist media, and I realised that it was some time since I had provided an authoritative update on this blog.

Socialist Media is one of the hottest things in the online world at the moment:  some would say it is even more important than the proper use of Meta Keywords, although they are wrong.

Getting Socialist Media right might seem complicated; however, it is becoming more widely understood by experts, and some of them are reporting great success from using Twitting as part of their strategy.

The difference between SOE and Socialist Mediums

SOE

  • Is done by computers
  • Uses PageRanks to decide what you want to see
  • Is only available from the Googler
  • Gives more money to a few people
  • Costs from £35

Socialist Mediums

  • Is done by people in a room
  • Can be bought from MySpace, Twitting, and The Facebook
  • Uses EgoRank to decide who is best
  • Distributes wealth on an equal basis
  • Costs from £75

How Socialist Media Works

Socialist Mediums all work by voting on the basis of the number of words in a document and the amount of time it took to produce it. Because the people in socialist mediums own the means of production, they are each awarded an equal share:

How Socialist Media Works

In most socialist media, the content is created by a team of experts who then submit it to the committee as a draft proposal for first stage approval.

Approval is done by a simple voting system of yes and no. Once a score has been calculated, the initial document can either be passed through to the computer programme or passed back to the writers for any required amendments.

Although socialist mediums are based on an equal voting system, members of the inner circle and politburo groups have a slightly higher weighting, and can normally pass documents unamended.

Assuming that the document is approved by the first committee, it is passed on to a computer programme that uses either the EgoRank or the AwesomeRank algorithm to give it a final score.

If the score is more than 5, the draft document is converted into a Diktat and released to the community. If the final score is less than 5, the document is discarded.

The final score determines how many people will see the Diktat.  A very popular approval may be read by more than 100 people, whereas a document with limited scope will normally be confined to around 5-10 members.
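For committee members who prefer their procedures written down, here is a rough sketch in Python of the approval process described above. The 1.5 politburo weighting, the stand-in ego_rank formula and the readership thresholds are my own inventions for illustration, not official Socialist Media values:

# A rough sketch of the approval workflow; the weighting, the EgoRank stand-in
# and the readership thresholds are invented for illustration.

def committee_score(member_votes, politburo_votes):
    # Ordinary members count once; the inner circle and politburo count a little more.
    score = sum(1 if v == "yes" else -1 for v in member_votes)
    score += sum(1.5 if v == "yes" else -1.5 for v in politburo_votes)
    return score

def ego_rank(score, word_count, hours_spent):
    # A made-up stand-in for EgoRank: more words and more time spent mean more merit.
    return score + word_count / 1000 + hours_spent / 10

def process_draft(member_votes, politburo_votes, word_count, hours_spent):
    final_score = ego_rank(committee_score(member_votes, politburo_votes), word_count, hours_spent)
    if final_score < 5:
        return f"Draft discarded (score {final_score:.1f})"
    readers = 100 if final_score > 8 else 10  # popular Diktats reach more members
    return f"Diktat released to around {readers} readers (score {final_score:.1f})"

print(process_draft(["yes", "yes", "no", "yes"], ["yes"], word_count=2500, hours_spent=40))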


How to: Build a Content Farm

One of the hottest things in the world of SOE right now is content farming, but as with any new technique, there are more so-called experts getting it wrong than getting it right.  I was recently privileged to spend some time with one of the world’s leading content farmers – Level 5 SOE Guru Arthur Giles – who told me how anyone at SOE Tier 4 or above with a skill rating of +17 can build their own content farm using a robot and a squirrel.

Getting Your Content

There are more than 200 people who currently own internets, and all of these people need to make their own webs using a clever combination of words and HTMLs. Most internets can be accessed at no cost by normal users on home computers with either a Chrome or a FireFox. What a lot of the website makers do not know is that normal internets can also be used by robots, who are interested in reading the world wide web when not working in factories, flying through space or destroying small cities.

One of the advantages of being an SOE expert with a +17 skill level is that you are able to build your own robot that can be programmed to simply read the internet all day and copy the pages it finds into a dating base:

A Web Robot Eating the Internets

The web robot can travel from one internet to another using its wheel and then suck web pages into the dating base using a special scooper that emits low-frequency eigenvectors. Once it has the internets in its scoop, they are put straight into the dating base.
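For +17 experts who would rather not weld an actual robot, here is a very rough software sketch of the same idea in Python, using an SQLite dating base; the requests library and the example internets are my own assumptions rather than anything Arthur builds his robots from:

# A very rough sketch of a web robot that scoops internets into a dating base.
# Uses Python's built-in sqlite3 plus the third-party requests library;
# the example URLs are placeholders, not real content sources.
import sqlite3
import requests

dating_base = sqlite3.connect("dating_base.db")
dating_base.execute("CREATE TABLE IF NOT EXISTS internets (url TEXT PRIMARY KEY, page TEXT)")

urls_to_eat = [
    "https://example.com/",
    "https://example.org/",
]

for url in urls_to_eat:
    try:
        page = requests.get(url, timeout=10).text  # the special scooper
    except requests.RequestException:
        continue  # some internets refuse to be eaten
    dating_base.execute(
        "INSERT OR REPLACE INTO internets (url, page) VALUES (?, ?)", (url, page)
    )

dating_base.commit()
dating_base.close()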

Using the Squirrel

Once the internets are stored safely in your dating base, you will need to use something called a squirrel to get them back out. These are not the kind of hairy rats that you see in trees, but a special kind of squirrel that was genetically modified in Russia during the Cold War. Instead of crawling around trees finding nuts like proper squirrels, they crawl around the dating base looking for whatever you want them to.

To run your content farm properly, you will need to get both kinds of squirrel. There is a short-tail variant that looks at the top of the data, and a longer-tail one that can go much deeper into the dating base in order to find more of the content that your web robot has put in:

A Data Squirrel in its natural habitat

From time to time you will need to replace your squirrels with newer versions, because they can become tired and lazy and start reading the webs that you have rather than just collecting them and bringing them back. This is bad because they might get clever and not want to do any more work for you.

What to do

You will need to buy a special internet for yourself in order to let real people look at your webs on their computers, and then all you need to do is write an instruction matrix for the squirrel to let it know what the person wants to read. Although squirrels can become intelligent, they are usually pretty stupid, so you should make the system simple for them with a computer programme.

The best content farms use a simple “interface”, which is designed to look like a face. Get your users to type what they are looking for into the mouth bit, and then the computer can tell the squirrel what to look for:

A Typical Web Page

Your computer programme to tell the squirrel what to do should look like this:

10 get INPUT_FROM_MOUTHBOX
20 go_to DATING_BASE
30 find INPUT_FROM_MOUTHBOX in DATING_BASE
40 put RESULTS on COMPUTER

This tells the squirrel to go to the dating base and find all the web pages in there that include the subject the user wants to read about. It will then print them on the screen.
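For anyone whose computer does not speak that particular dialect, roughly the same instruction matrix might look like this in Python, assuming the dating base built by the web robot above with its pages stored in a table called internets:

# A rough translation of the squirrel's instruction matrix into Python.
# Assumes the SQLite dating base from the web robot sketch above,
# with the pages stored in a table called "internets".
import sqlite3

def squirrel(input_from_mouthbox):
    # 20 go_to DATING_BASE
    dating_base = sqlite3.connect("dating_base.db")
    # 30 find INPUT_FROM_MOUTHBOX in DATING_BASE
    rows = dating_base.execute(
        "SELECT url FROM internets WHERE page LIKE ?",
        (f"%{input_from_mouthbox}%",),
    ).fetchall()
    dating_base.close()
    # 40 put RESULTS on COMPUTER
    for (url,) in rows:
        print(url)

# 10 get INPUT_FROM_MOUTHBOX
squirrel(input("What would you like to read about? "))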

Profit

There are a small number of companies who want to advertise on the internet, and some of them will pay as much as £1 to appear on your pages!!!! All you need to do to take advantage of this is to reserve some of the page for their adverts. You could even set up a second dating base of people who want to advertise on your content farming pages and then use a different squirrel to find the ones that match the things that your users are looking for; however, it is important to note that you will need a skill rating of +23 and to be a level 7 SOE Expert to write the complicated programming that is needed for that kind of thing!
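Strictly between us, the complicated programming is not quite as frightening as it sounds. A rough sketch of the advertising squirrel in Python might look like this, assuming a second adverts table is added to the same dating base; the table name and the matching rule are my own guesses, not Arthur’s:

# A rough sketch of the second, advertising squirrel.
# Assumes an "adverts" table (advertiser, topic) in the same SQLite dating base.
import sqlite3

def advert_squirrel(input_from_mouthbox):
    dating_base = sqlite3.connect("dating_base.db")
    dating_base.execute("CREATE TABLE IF NOT EXISTS adverts (advertiser TEXT, topic TEXT)")
    # Find advertisers whose topic appears in what the user is reading about.
    rows = dating_base.execute(
        "SELECT advertiser FROM adverts WHERE ? LIKE '%' || topic || '%'",
        (input_from_mouthbox,),
    ).fetchall()
    dating_base.close()
    return [advertiser for (advertiser,) in rows]

# Reserve some of the page for whoever comes back, at £1 a time.
print(advert_squirrel("genetically modified squirrels"))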

Namaskara.

Magical – The New iGoogler Launches

As any well-informed SOE professional above tier 3 knows, until very recently the key to getting a good rank from the Googler was to have fully optimal content with a keyword density of precisely 16.7% and a demonstrable range of high-quality interaction eigenvectors. Unfortunately, this has now changed, and with the launch of the new upgraded iGoogler2, a lot of so-called SOE experts have found themselves trapped on page 7 or 8 of the search results.

I was one of just 13 people who were invited to an exclusive launch event at an exclusive restaurant serving contemporary American Cuisine close to the fabled Google Castle in America, where we were shown the astonishing new iGoogler2 in all of its glory.

Here it is:

The New iGoogler 2

With a 60% bigger brayne installed in the iGoogler2, it has been possible for Google to reduce the specific keyword density of pages ranked at number 1 from 16.7% to just 15% – that’s a 10% improvement!
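For anyone wanting to check their own pages against the new 15% target, keyword density is simply the number of times the keyword appears divided by the total number of words on the page. A quick sketch in Python, with an invented page of text:

# A quick sketch of a keyword density check; the page text is invented.
def keyword_density(text, keyword):
    words = text.lower().split()
    if not words:
        return 0.0
    return 100 * words.count(keyword.lower()) / len(words)

page = "cheap widgets cheap widgets buy cheap widgets today because cheap widgets are cheap"
print(f"{keyword_density(page, 'cheap'):.1f}%")  # about 38% - rather too optimal, even for the old Googler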

Thanks to some heavy dieting, and having bits cut off, the new version is some 23.4% thinner than the outgoing model. This has serious implications for web masters and mistresses. Previously, the chubbiness of the Googler meant that it could only handle the low-frequency interaction eigenvectors. Being smaller means that the new version can also calculate PageRanks using latent semantics at the same time!

Another major upgrade to the iGoogler 2 is the addition of better vision. This means that it can find smaller images than before and show them to users. It also means that the previously useful techniques of hiding additional text by making it 6pt in size or having it the same colour as the background will no longer be as successful – unless your pages are green.

The full implications of the new iGoogler2 are yet to become apparent, but many of the top SOE specialists in the world will be scrambling to uncover its many secrets.

Namaskara!
