Archive for November, 2007

OpenLDAP Synchronisation Challenge

November 29th, 2007

Through faults of my own, the recent implementation of an LDAP server did not fully meet the functional requirements I had drafted.  The main issue was disconnected operation – something I only became aware of after Gavin Henry explained the exact slurpd mechanism to me.

Thanks also to Suretec’s Blog I noticed that slurpd is deprecated in the 2.4 release of OpenLDAP.  Since this is a new installation at 2.3.38-9, it seemed sensible to remove anything that is going to be deprecated within the first few months of “Going Live”.  Therefore I was on the lookout for the new procedure.

The replacement mechanism is something called syncrepl, which is specified only in the slave’s slapd.conf. The man pages fully explain the features – but to my understanding it can be configured either to pull information from a master LDAP server on a schedule and store it read-only, or to have the master push updates down to the slave as they happen – though either way the slave remains read-only.
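As a rough sketch, a syncrepl consumer stanza in the slave’s slapd.conf might look like the following – the rid, provider hostname, search base and credentials are invented placeholders, not my real values:

```
# Slave slapd.conf – syncrepl consumer (sketch).
# type=refreshOnly polls on the given interval; refreshAndPersist
# instead keeps a connection open so changes arrive as they happen.
syncrepl rid=001
         provider=ldap://master.example.com:389
         type=refreshOnly
         interval=00:00:05:00
         searchbase="dc=example,dc=com"
         bindmethod=simple
         binddn="cn=replicator,dc=example,dc=com"
         credentials=secret
```

The replica is still read-only; writes against it need to be referred or chained back to the master.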

If clients need to make modifications via the slave URI (because they cannot directly access the master), one has to use the “chain overlay.”  This overlay forwards LDAP modifications from the slave to the master – in what I can only imagine to be a mechanism similar to “port forwarding” on a router. [1]
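Something along these lines in the slave’s slapd.conf is what I have in mind – the hostnames and DNs are placeholders, and the exact directive syntax varies between 2.3 and 2.4, so check slapo-chain(5) before copying anything:

```
# Slave slapd.conf – chain overlay sketch: writes that hit the
# read-only replica are chased back to the master transparently.
overlay            chain
chain-uri          "ldap://master.example.com"
chain-idassert-bind bindmethod=simple
                   binddn="cn=proxy,dc=example,dc=com"
                   credentials=secret
                   mode=self
chain-return-error TRUE

# The referral the replica returns for write attempts – the chain
# overlay follows this instead of handing it back to the client.
updateref          ldap://master.example.com
```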

So back to my setup.

I have two three-server clusters in two remote locations.  Call these cluster 1 and cluster 2.

Cluster 1 is the main cluster, where clients will be regularly updating their passwords and where most of the server activity will take place (file uploads, etc.).  Therefore, this cluster has the master OpenLDAP server.

Cluster 2 is a backup cluster, in case anything should happen which would bring down Cluster 1 (such as a power failure at the remote location).  Instead of going down, the services would failover to Cluster 2.  This is where the slave LDAP server is located – as it doesn’t see as much activity.

However, in order to centralise the company’s user database, the slave LDAP server also acts as the authentication server for all the users “on site” at the company.  The local intranet authenticates against it – and some users change their passwords by logging onto this machine.

The “chain overlay” seems to be the most sensible option, and coupled with syncrepl this could solve 95% of what needs to be done.  However, in disconnected operation the LDAP server would become read-only – which would mean some attributes could not be updated.  Ideally the slave server should log all changes made that couldn’t be forwarded to the master server – then apply them and update the master server when the connection is restored.

The other thing that adds to the complication is the configuration.  Both of the LDAP servers sit behind a Firewall machine – therefore the network diagram is such:

LDAP(SLAVE)—-FW(CLUSTER2)—-INTERNET—FW(CLUSTER1)—-LDAP(MASTER)

The only two machines in each cluster that can talk to the other cluster are the FW machines – therefore I’ve set up some port forwarding to allow the LDAP machines to see each other by setting arbitrary ports on the FW machine.  FWx:389 (where x is the local cluster) forwards to the LDAP server on the opposite cluster.
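On the firewall machines that forwarding could be done with iptables DNAT rules along these lines – the interface name and the 10.x addresses are invented for illustration:

```
# On FW1 (sketch): LDAP traffic arriving on the inside interface is
# redirected to the LDAP server behind FW2, reached over the tunnel.
iptables -t nat -A PREROUTING -i eth1 -p tcp --dport 389 \
         -j DNAT --to-destination 10.8.0.2:389

# Rewrite the source so replies route back through this firewall
# rather than trying to return to the original client directly.
iptables -t nat -A POSTROUTING -p tcp -d 10.8.0.2 --dport 389 \
         -j MASQUERADE
```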

It’s probably not the most complicated (or convoluted) setup you’ll ever see – but this does add another layer of failure.  Communication inside the cluster needs to be tightly controlled.

If you have any ideas on how best to implement this – please let me know and I’ll keep you updated with my progress.

[1] I may be wrong

Create a new Company

November 28th, 2007

Well, I’ve had quite a busy few days getting some more work done on a few websites for friends.  I don’t do anything commercially _yet_ – but one opportunity has reared its head and I will even be able to charge a fairly robust rate for the work – as well as generating a monthly “support fee.”

This being the case, how difficult/sensible would it be for me to set up my own company to handle this business?  I guess were it to be formed, I’d also want to put some other work through it too.

Answers on a postcard please. (or below in the comments).

The Importance of The Open Rights Group

November 23rd, 2007

http://www.openrightsgroup.org/

I only came upon this group whilst listening to an interview on BBC Backstage between Ashley Highfield and Mark Taylor (of the OSC). I’m afraid I forget the representative’s name from the ORG who was there, as well as a number of other guests. Rather than blabber on about the iPlayer again – I’ve been thinking a lot about individual freedom and responsibility.

A couple of weeks back I was listening to a Radio 4 programme on the subject of “When do Children become Morally Responsible?” It was quite a shocking programme, as one of the core “yardsticks” used for and against the argument was the James Bulger case. However, the main thing to come out of the programme was that the “psychology experts” and the social services were arguing that the age of moral responsibility should be raised to 16, or even 18 – anyone under that age is not morally responsible. With that in place, a murderer at 17 would be assumed not mature enough to have realised that what they were doing was wrong, and therefore would not be a murderer. I know that by taking rules to their extreme you’re bound to find “shock hypothetical situations” which do not follow the spirit of the law, but surely we can agree that the vast majority of (and probably the entirety of sound-minded) 17-year-olds can judge the moral implications of murder.

I don’t want to prioritise what I am about to say above the question of moral responsibility – but at what point should computer users be responsible for the software on their computer? Some people would say that if the user does not agree with the End User License Agreement (EULA), then they should not install the software. However, EULAs are often very long – and in many cases seem irrelevant to an end user. Imagine sitting down to your freshly bought PC – along with a suite of applications and games – and realising you don’t agree with the EULA. Taking it all back to the shop doesn’t appear to be a viable alternative, so I’m guessing 99.9% of the time you’ll ignore it.

This is assuming that the software is bought and legal at all – take “re-installing Norton Internet Security 2001 for the 6th time to take advantage of the free 12 months of updates you got when you bought it for £30 back in 2001” as an example of “breaking the rules.” The EULA now becomes irrelevant – you’re operating the software illegally. “SO WHAT?” – it’s not like there are any computer police that come round and check your disks for illegally downloaded software. Well no.. not specifically – but with the birth of the internet.. they don’t need to come over to see what software you’re running.

It had puzzled me as a teen why Microsoft couldn’t tell who was illegally downloading updates to an “already registered” version of Windows and put a stop to it. Well, in 2006 they realised they could and started doing it.. but few people have really got into any trouble for it… more often than not they’ve just found another way to circumvent Microsoft’s checks and continue as normal.

However, software licences appear irrelevant to the user – but what about “information licensing?”

Information Licensing

In the past, this was more a question for academics and industry. If they had ideas which needed to be protected, yet shared, then they would have to be licensed. Patents are one example; copyright, trademarks, the © sign and the ® sign – they’re all examples of people protecting what’s theirs. Do we no longer care about what we own – or are we ignorant?

I hate to say it but I think it’s ignorance that’s causing most of the issues around us today. I read not so long ago that Virgin Mobile ran an advert along the lines of “get a thumb friend, not a pen friend” on a bus shelter – with a picture of a geeky girl underneath. This picture of the geeky girl was legally used from the website Flickr – where, as one of the conditions of joining, you agree to publish your photos under a specific Creative Commons licence. The fact that this girl was rather upset at being called a geek (especially all over international media, since she made a fuss) was not the fault of Creative Commons, or Flickr – but of the girl herself for not reading the EULA. Do you think she understood that? That’s funny – her team of lawyers didn’t either. Luckily for her, the fuzzy and well-meaning Richard Branson had the pictures removed from circulation. [1]

The same is true of Facebook. Well, not really.. but similarly. Can you delete a Facebook account? (no) – but you can “de-activate it until you wish to return.” Delete it.. no. Can you delete a photo off Facebook’s servers? (no) – but you can remove all published links to that photo – so only those who know the actual photo URL can access it. Go on – have a go :)

We’re being careless with our information. Everything I type on this blog is being cached by Google. They have more information on me than I have in my filing cabinet. They probably know me better than my mother knows me. They know an unprecedented amount. No company before the digital age could ever have expected to know what Google knows. It’s why their advertising is so ubiquitous on the web. The more they know, the more they can target you, the more you’re worth to advertisers. What you don’t want to do is give yourself to them. If there’s an embarrassing video/photo of you on the internet – you want to get rid of it. If you wrote something in anger, and want to remove it.. you want it REMOVED. No way – not any more. Wake up and smell the coffee.

Where the Open Rights Group come in.

The Open Rights Group campaign to make sure your information is used the way you want it to be. Your digital rights need to be upheld by someone. If you bought a CD in the past, you could play it in whatever walkman you liked. Buy a tune from itunes, and you have to use an iPod. Is that right?

The Open Rights Group exists to do 5 things:

  • To raise awareness in the media of digital rights abuses

  • To provide a media clearinghouse, connecting journalists with experts and activists

  • To preserve and extend traditional civil liberties in the digital world

  • To collaborate with other digital rights and related organisations

  • To nurture a community of campaigning volunteers, from grassroots activists to technical and legal experts

In the move to digitisation “traditional civil liberties” are being eroded by a new-found ability to restrict or exploit users. The Open Rights Group aims to respect the rights of all parties, yet maintain the liberties to which we have become accustomed. Whilst this issue appears to be one for the technically competent or the nerds, geeks and hackers – it’s one for every person who uses a computer, or has information about them digitised. That means every person living legally in the UK.

I urge you to look into supporting the Open Rights Group not as a matter of charity, but as a matter of responsibility.  As with many things, the novelty of new ability and technology deprecates the old.  Moving databases, services and infrastructure onto new technology was a question of whether we “could,” with a failure to look at whether we “should.”  Unfortunately, HMRC have recently provided us with a first-rate example of a system that promotes running before walking.  Anyone with access to a confidential database of 25 million people should not be able to copy that database onto CD – encrypted or not.  Even that is a moot point when you consider the whole thread of events.  Why should the NAO even need the database in the first place?  If my auditors asked to see the credit card records of my customers, a simple “No” would suffice.  These are the things that need protecting – and that is the role of the ORG.

[1] ed. I hope I got the right company, but I may be wrong.. I didn’t want to ruin the story though as it’s “eventfully accurate.”

England 2 -3 Croatia (or “The Truth is out there [but not on PES2008]”) ..conclusion

November 21st, 2007

Well thus ends England’s Euro 2008 campaign, and it’s not even 2008 yet.

It was a pretty dismal performance; the team looked fragmented and detached.  I’ve not been following the football that closely, I must admit – but the level of communication and ‘oomph’ required really wasn’t there.

I had a quick game of Pro Evolution Soccer 2008, which was recently released on the Xbox 360. (See, I’m not that much of an MS-bashing Linux fanboi!) It’s quite a fun game, especially as I can claim the upper hand against my flatmate for most of the games we play. Well, the truth of English football is ingrained in the stats. I don’t know if the game is biased when it’s shipped to Japan, China or the USA – but the pentagon used to show the team stats is thoroughly slanted when it comes to England. This image is of England versus Croatia.



As far as the manager’s job goes, I will be sad to see him leave. However, we need to learn lessons from his (short) tenure, as he learnt lessons from dear Sven. The most enjoyable times I’ve watched England were the couple of three-nils with Barry introduced into midfield. It really worked. I just hope the next England manager is afforded the space by the media to do similar things – and I guess with no competitive matches until the World Cup, he’ll hopefully have more freedom to mix it up before it really matters.

Anyhow, I don’t want to rub salt into the wounds.. So goodbye, Euro 2008.. The next thing to look forward to is World Cup 2010 – and I’ll be 24.. ouch.

England 0 – 2 Croatia (or how media madness ruined English Football).. first half report

November 21st, 2007

It’s a bit of a joke really.  First off, having to rely on a team like Israel to even give England a chance of getting to the European finals – and then to belittle a team with as much quality as Croatia, with the majority of pundits expecting England to beat them easily.

I was rather amused at being able to see the “gridiron” from the wonderful NFL match held at Wembley three weeks ago – but the best bit was this.

Due to the gridiron being narrower than a football pitch, the “touchline” for the gridiron was ten yards inside England’s right wing during the first half.  I now ask you to watch an NFL game, with all the circus standing on the sidelines.  We have “technical areas” – they have hoodlum asylums – everyone can stand there.. pacing up and down and RUINING THE FOOTBALL PITCH!

Now if this isn’t yet another example of how the media (especially Mr Murdoch) are ruining the “beautiful game” then I don’t know what is.  That, and the fact that the media were belittling Croatia.  They’re a good side, and I think it’s going to be a very interesting second half.

PCI DSS* – where Open Source should have an advantage.

November 21st, 2007

PCI DSS = Payment Card Industry Data Security Standard.

Over the past few weeks and months I’ve been helping to develop a PCI DSS system for a client.  It’s been quite a feat, as there are quite a few integrity checks and tripwire monitors to set up and automate – as well as having the running services audited for secure protocols, and policies in place to make sure that any holes are patched and recorded properly.  It’s been quite a big learning experience for me – not only in the technological challenges, but in the managerial “nuances” of passing an audit.
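To give a flavour of the automation involved, a scheduled integrity check is about as simple as a cron entry like this – the schedule, mail address and report subject are made up for the example, and the exact tripwire flags depend on your version and policy file:

```
# /etc/cron.d/tripwire-check (sketch) – nightly file-integrity scan,
# with the report mailed somewhere it will actually be read.
0 3 * * * root /usr/sbin/tripwire --check 2>&1 | mail -s "Tripwire report" secops@example.com
```

The hard part for the audit isn’t running the check – it’s proving that someone reviews the report and that changes to the policy itself are controlled.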

Standards

The first thing is that the PCI requirements are the same across all companies that handle/store credit card data.  There are 12 main requirements, each of them having sub-requirements which go into more specific detail.  Things like password policies and log retention periods are stated.  It’s hardly environmentally friendly either – as at least two (if not three) different physical servers are needed to fulfil the requirements – and for backup purposes that doubles if you’re going to have failover redundancy.

Security by Obscurity

One of the biggest pains I had with the PCI DSS implementation was that there wasn’t much guidance, and there were few howtos, on how other people had secured their PCI systems.  Well, it’s not too surprising really – if you’re securing a system you’re hardly going to want to publish details about how you’ve done it.  However, security by obscurity is as good as none once someone finally breaks the obscurity.

Together we prevail, divided we fall.

I would argue that this should be a motto of every open source group functioning.  It would save so much time and money if, for example, Red Hat were to provide a “PCI compliant” authentication server and webserver cluster.  Imagine setting up two servers and running:

rpm -ivh dbserver

rpm -ivh wwwserver


It’d save a whole lot of time and effort on the part of an individual systems administrator.

However, it doesn’t need a behemoth like Red Hat to do this – it needs a few people working together to set up their own repository, and some incentive for doing it.  It doesn’t even have to be packages – it could just be documentation for now.  An anonymous library of PCI documentation could save administrators a lot of time.
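The repository side of that idea is genuinely small work – roughly this, with the paths and URL invented for the example:

```
# Build yum metadata over a directory of shared hardening/PCI RPMs
createrepo /var/www/html/pci-repo

# Each client then points a .repo file at it, e.g.
# /etc/yum.repos.d/pci.repo:
#   [pci]
#   name=Community PCI packages
#   baseurl=http://repo.example.com/pci-repo
#   enabled=1
```

The real effort is in maintaining the packages and documentation behind it, not the plumbing.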

I’ve posted some of my howtos related to OpenLDAP on the blog over the last couple of months.  This is an integral part of any GNU/Linux PCI system, in my opinion, as monitoring user activity and authentication is central to passing the audit.  Users must be well managed, with no shared passwords, and stored in a single directory.  OpenLDAP is made for this purpose.

I’ll hopefully be releasing more documentation as time progresses.  If you have been through a PCI audit recently – and would also like to help out your fellow Open Source Administrators – don’t hesitate to post me some of your documentation.  I’ll set up a wiki if I start getting lots of it through.

Interface Design

November 21st, 2007

(This post was imported from another site.. it’s a few months old now)

My background in IT is not very conventional for a Linux Systems Administrator. Much of what I have learnt has been self-taught, with much time and space given to me by my employer. My ‘academic’ background is in ‘psychology and business.’ Unfortunately, I failed to get on the pure Psychology BA, therefore did a Joint Honours course in the above.

Having spent over a year now working in the FLOSS business world, there are a few things that I have noticed. In order to improve our own products, we must look at what the competitors are doing. Microsoft work by having tightly integrated products; MS Outlook and Exchange Server are fantastic examples of this approach. Apple, on the other hand, are much more focused on the desktop and “play applications” such as their iTunes and QuickTime products. Their focus is on the interface design of a computer, rather than attempting to imitate Microsoft. However, the DRM they use on their iPods is allowing iTunes to take quite a large market share off Windows Media Player.

Where can FLOSS improve?

FLOSS already has applications that are very advanced and stable; the main issue users will have is the transition from the 100% GUI (for the end-user) of Windows XP. The steps Apple have taken to ease the switch from MS to Apple involve making the interface as easy to learn as possible. One of the things that really impressed me was their MIDI connection interface. You have the inputs on the left, the outputs on the right, and you drag and drop connections between devices to create loops – it’s really intuitive, and beats any MIDI interface I’ve ever used on Windows (not to say there isn’t one – but Apple’s is OS-level). Now, I have recently started playing with Ubuntu Studio – and the FLOSS program JACK does exactly the same thing (it’s so similar it may even be the backend for the Apple one) – however, in its interface you have to select the two devices and then click connect. Whilst picking up on small things like this may seem quite pedantic, it’s a solution that programmers can understand and use right away – but for musicians, Apple’s solution is much better.

Pet hates:

I do have a couple of pet hates – my first is abuse of the term Web 2.0. To me, Web 2.0 means browser-based applications. If I need to be running a particular operating system, or a particular browser, to use the software then, to me, it’s not Web 2.0.

Why is this important to interface design?

Whilst such pet hates may not appear to have an obvious implication for interface design, the lack of open standards and complex proprietary code (Flash) mean that the experience is very different for users of different operating systems. If you take the software this site is built on (Drupal) and view it in IE6, IE7, Firefox and Safari, there are more often than not going to be differences – due to the way the different browsers interpret the less well documented standards. The advantage of open standards (HTML being probably the most popular) is that so much more information is open to anyone, whatever their OS.

The advantage of developing in FLOSS is that most of it can be used on other systems. Provided a platform-agnostic approach is taken from the start, there is a good likelihood that the software will work on other operating systems. Interface design needs to be a precursor to development, not an afterthought. The main barrier to this is cost – many open source developers simply can’t pay to have an extra person on the team who is not directly contributing code. Let’s try to use the research that has been done in this area to develop better user interfaces. Beryl and the 3D desktop may be nice, but we need to look at which components actively add to the user experience, and which are simply eye candy. There is a clear differentiation between the two forms of interface – let’s hope that FLOSS developers are able to harness the power of projects such as Beryl and create a more streamlined, end-user focused interface.

FLOSS into the Future.

November 13th, 2007

I have put off writing this post for quite some time, as I didn’t feel as though I could do it justice.  However, by keeping the thoughts in my head I’m not getting anywhere – so it’s time to put them up on the internet for others to comment on too.

Does FLOSS fit into a particular political camp?

The Free Software Foundation espouses some very strong and fundamental rules regarding free software.  The ‘GNU’ utilities that come shipped with the ‘Linux’ kernel are probably the most well known in the FLOSS world as being the basis of RMS’s Free Software movement.  Unfortunately I am rather ignorant of the majority of his work, and need to find time to read up on the history of the FSF – so will leave that to another post.  However, let’s take the basics of the FSF message.



  • The freedom to run the program, for any purpose (freedom 0).

  • The freedom to study how the program works, and adapt it to your needs (freedom 1). Access to the source code is a precondition for this.

  • The freedom to redistribute copies so you can help your neighbor (freedom 2).

  • The freedom to improve the program, and release your improvements to the public, so that the whole community benefits (freedom 3). Access to the source code is a precondition for this.

Those are the fundamentals, from the FSF website.

Initial Thoughts

My first thought on Free Software was of the massive difference it could make to the social world.  The passage of information in digital form across borders is unprecedented.  The ability for people in America and Europe to work alongside people from all the other continents marks a paradigm shift in global relations and communications.

Business

One of the first things I did when I started to use GNU/Linux was to create an “Office Server.”  It processed email, had document storage, and had a RAID1 setup across two data hard disks.  Now, I am ignorant of the underlying technology and the kernel programming – I knew absolutely no programming, having not spent even a day programming at school or college.  I stopped being taught IT in school at Year 9, as the teachers were so far behind.  Whilst I was maintaining multi-table databases, they were teaching me how to manoeuvre a turtle 90° on a screen.  Without other people’s effort and contributions I could not have made such a server.

Edubuntu, Edubuntu, Edubuntu!

The next most astounding thing I found was the distribution Edubuntu.  Since its launch in 2004, Ubuntu has become the “golden child” of Linux users.  Under the leadership of Mark Shuttleworth, Ubuntu has started to gain a respectable market share of the Linux sector, and is even now starting to break into territory previously dominated by Microsoft.  Edubuntu is a distribution of Ubuntu aimed at providing the main Ubuntu desktop plus a selection of educational tools.  I will not forget the first time I had this set up and my two little brothers came and played on it.  Who knew the periodic table could be so much fun!

LTSP

That wasn’t what hit me though.  It was the LTSP capabilities included by default.  The majority of PCs at my house were less than 5 years old, and connected to the home network by Ethernet.  With a simple configuration change (setting them to boot from the network card, rather than their hard disk) I was able to convert my whole house into a massive classroom.  It’s not just the advancement of technology and IT geekery that the FSF provides – but access to new information.  Instead of paying £s per seat for a server–client set-up at school (probably provided by RM), a school could implement this Edubuntu solution – and it installs straight off a CD.  Now sure, there are some maintenance tasks that would require a Linux technician – but the tools and resources are out there.
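For reference, the server side of that network boot is plain DHCP/PXE pointing the clients at the LTSP image – roughly like this fragment, where the addresses and paths are illustrative (the Edubuntu installer generates the real file for you):

```
# /etc/ltsp/dhcpd.conf (sketch) – hand thin clients an address,
# a PXE boot loader, and the root filesystem to mount over the network.
subnet 192.168.0.0 netmask 255.255.255.0 {
    range 192.168.0.20 192.168.0.250;
    option root-path "/opt/ltsp/i386";
    filename "/ltsp/i386/pxelinux.0";
    next-server 192.168.0.1;    # the Edubuntu/LTSP server itself
}
```

The only change on each client PC is the BIOS boot order – network card before hard disk.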

Environmentally Friendly

This is where I think the argument in FLOSS’s favour can sometimes get distorted – and rightly so, because it’s a complex issue.  FLOSS applications can generally run on much older hardware than proprietary ones.  Whilst the capitalist model of the proprietary companies has been to use every resource available and force users to upgrade, Free Software has held back; although there is a lot of software that benefits from newer and faster machines, there are many distributions tailored to get the maximum out of older hardware.  Because computers are so toxic, preventing their decommissioning and extending their life is seen as “environmentally friendly” – however, this needs to be balanced against the fact that electricity usage in older machines is far less efficient than in modern models.  GNU/Linux, however, can also run modern computers at peak efficiency with advanced power management.

Basics

People talk of the end of the Desktop – that the Desktop doesn’t matter.  Some people even say the Desktop war is over – we’re in Web 2.0, therefore it doesn’t matter.  I’d say that’s rather premature, given that some companies are making supposedly Web 2.0 applications that are tied not only to a particular company’s operating system, but to a particular version of it.  GNU/Linux has a massive part to play – however, in my view it’s the ‘networking’ that will prevail.  We’ve recently seen the growth of Facebook and MySpace – the two great social networking giants.  Google’s OpenSocial is planning to level the playing field by offering a standardised platform for network programming.  In my opinion it will be these openly standardised networks (be that SIP, XMPP, OpenSocial) that will be the success story of the next ten years, rather than just one company.

Who will lead us?

In the UK, no single political party has taken the lead in the promotion or adoption of FLOSS.  George Osborne has probably been one of the loudest and most high-profile exponents of a move towards Free Software – but the general political machine has yet to change.  The much-hyped e-GIF (Electronic Government Interoperability Framework) is about as useful as a chocolate teapot in providing a framework for public development.  A published and open specification would allow some kind-hearted FLOSS engineers in the UK to contribute their time and effort to projects that the government could use.  Whilst I don’t expect to see FLOSS software used in all tiers of government, it would be nice to see an acknowledgement of its prominence.  The UK’s ability to regenerate its indigenous scientific and engineering superiority would be significantly enhanced by the uptake of FLOSS solutions, and would perpetuate the benefit to both the FLOSS community and the UK as a whole.

Call to Arms

To all politicians: please don’t wait on the sidelines wondering whether or not your adoption of FLOSS is going to cause offence to any current software providers.  Don’t put second-rate project managers in charge of FLOSS solutions and expect them to perform like an Oracle Database Server in the hands of an Oracle team of engineers.  That’s not how it works.  Encourage your current providers to utilise FLOSS solutions and let them feed their development and integrations back into the FLOSS community.  I don’t want to pretend that you can suddenly call upon a local FLOSS software house to run a project that is currently run by some outsourced company.  Change the framework to support FLOSS, change it to give FLOSS a chance to succeed, and change it for the good of our futures.

iPhone Marketing Review

November 11th, 2007

Well, the iPhone was released in UK stores last Friday, and despite already owning the Neo1973, I thought it’d make sense to look at the Apple iPhone first hand and see what all the fuss is about.

It’s nice.. really nice. I don’t think such loving and streamlined design has ever been put into a phone before. There are few places for dirt to get in, no annoying plastic memory card covers that fall off 4 months into an 18-month contract – and the screen is scratch resistant. It’s a nice piece of kit. It’s an iPod touch with phone and text capabilities.. however, get the iPod touch and save yourself £710. That’s the extra you’ll be paying to get the phone added to an iPod touch – is it really worth it?

Well, in my opinion, no. One of the things I _hate_ about the Neo1973 is also a ‘feature’ of the iPhone. I’ve got to the point of being good enough at texting to do it in the dark – without looking at the phone – and be 99% sure I’ve hit the right combination. I know the important numbers in my phonebook by heart, so it’s quicker to dial the number than to search for the contact in the address book. What do I rely on most to do this? Tactile feedback. Without physical buttons to press, the phone takes more of my concentration. It’s a good job that driving with a mobile has been banned in the UK – with the iPhone they’ll have to ban walking with one too.

The other thing that’s annoyed me is the marketing. I went through the Manchester Arndale the other day, and paid a visit to an absolutely jammed Apple store. Well done, Apple – that’s great. They had 8 iPhones for people to play on, and there were lots of “Apple geniuses” about to help people out. Great – good customer service.

Later in the day I returned to the Arndale centre with my flatmate, who wanted to take a look at the iPhone in the Carphone Warehouse. As we approached the store, there was a crowd of about 10 people outside in navy O2 t-shirts, with a big security guard guarding the shop. It looked as though there’d been a fire alarm or something and the shop had been evacuated, as there didn’t appear to be anyone near it. As we got closer, I realised the shop was still open, so we went in and had a look.

If I was from Nokia, LG, or any other handset company, I’d be quite annoyed with O2. In the middle of their shop was an ‘Apple-orientated’ display – which looked like a little bit of the Apple shop in the middle of an O2 store, rather than an O2 promotion. It had 6 iPhones (bearing in mind the main Apple store only had 8) with six guys looking at them (I realised later that two were customers and four were employees). I had to ask an employee to get off their phone so I could have a look.

I did have a little play and got bored quite easily, so stood there whilst my flatmate had a look at all the features and compared it to his iPod touch. Anyhow, I started talking to two of the assistants about the iPhone. I asked them what it was like to text on without a keypad. Their response was to tell me they knew absolutely nothing, and I had to explain to one of them that you had to press the key at the bottom to go back a menu, after she spent a good minute working out how to get to the text screen.

It was a complete waste of time. I asked them if they worked for O2 or Apple – and they were very pleased to tell me they were ‘O2 Angels.’ Fantastic. I thought I’d give them the benefit of the doubt and decided to ask them about the Viewty – a review of which I had read the week before in Stephen Fry’s column in the Guardian. “What’s that?” asked one, and “It might be over there” said the other – gesturing vaguely at a wall full of other phones that O2 sell. Nice.

As I walked out of the shop, I noticed that the people hanging around were also O2 Angels – dressed in their O2/Apple t-shirts to make it look as though there were actually customers in the shop. Surprisingly enough, no one could give me any figures on how many iPhones they’d actually sold… but there were still plenty available in the shop.

They’re just too expensive for too little extra. Take my advice and get an iPod touch and a Nokia 6630. There just isn’t any reason to pay the extra £710 for an Apple iPhone. Their marketing is well over the top, and I certainly haven’t got £1000 to spend on a phone for 18 months. Especially with only 200 free minutes and 200 texts – it’s a bad deal. The only good thing is the unlimited data transfer* – which is tied to a nice “fair use policy.”

Please don’t tell me I’m being unfair on Apple – I love the iPod.. but the iPhone iThink iCan’t iAfford.

Shorewall & OpenVPN & Routing help

November 9th, 2007

12:24 < andylockran> hey guys
12:24 < andylockran> I need some help as I haven’t yet identified the problem, so can’t look for a solution.
12:25 < andylockran> I’m trying to get two identical server clusters replicating over openvpn. As they’re “identical” I want to keep them with the same ip addresses. In order to avoid IP conflict, port forwarding is set up for the replication.
12:26 < andylockran> On cluster 1, db forwards its ldap port to the firewall machine (which is one end of the vpn) and the same happens at the other end on the second cluster.
12:27 < andylockran> However, for this to work, the opposing clients need to be able to see the other firewall machine (the two vpn ip addresses)
12:27 < andylockran> both db can see their own fw machine’s ip.. but not the other
12:27 < andylockran> can you tell me if it’s a routing, or bridging problem. or whether what I’m trying to do is incompatible with openvpn?
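For what it’s worth, my current suspicion is that this is a routing problem rather than a bridging one: each db host only has a route to its own firewall, so the remote VPN endpoint is unreachable from it, and the tunnel itself doesn’t know which networks sit behind each end. A sketch of the sort of fix I have in mind – all addresses invented for the example, and very much unverified:

```
# On each db machine: send traffic for the remote VPN endpoint
# via the local firewall (the local end of the tunnel).
ip route add 10.8.0.2/32 via 192.168.1.1

# In the OpenVPN server config: advertise the LAN behind the far
# endpoint, paired with an iroute line in that client's ccd file.
route 192.168.2.0 255.255.255.0
```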