
17 January 2024 – John Simpkins and Gerald Barnes


(10.02 am)

Mr Beer: Good morning, sir, can you see and hear us?

Sir Wyn Williams: Yes, thank you very much.

Mr Beer: May I call John Simpkins, please.

Sir Wyn Williams: Yes, of course.

John Simpkins


Questioned by Mr Beer

Mr Beer: Good morning, Mr Simpkins. My name is Jason Beer, as you know, and I ask questions on behalf of the Inquiry. Can you give us your full name, please?

John Simpkins: John Graeme Simpkins.

Mr Beer: Thank you. You previously gave evidence to the Inquiry on 9 November 2022. That was in Phase 2 of this Inquiry and I think you were told on that occasion that there was a possibility that you may be recalled in a later phase or phases of the Inquiry. Thank you very much for coming back again in this phase, Phase 4.

Since you gave evidence in November 2022 you have made two further witness statements. I think, on 30 August 2023, you made a 24-page witness statement with the URN WITN04110200. Can you turn that up in the folder in front of you, at tab A1.

John Simpkins: Yes.

Mr Beer: If you go to the 24th page, you should see a signature?

John Simpkins: Yes.

Mr Beer: Is that your signature?

John Simpkins: It is.

Mr Beer: Are the contents of that witness statement true to the best of your knowledge and belief?

John Simpkins: They are.

Mr Beer: That statement, is this right, is principally about the provision, the use and the reliability of ARQ data?

John Simpkins: Yes, it’s more about – I was given an extract of an ARQ data and could I discuss what it represents.

Mr Beer: Thank you. Then on 19 December 2023 you made a 10-page witness statement with the URN WITN04110300, and if you go to the tenth page of that, please, in tab A2, is that your signature?

John Simpkins: It is.

Mr Beer: Are the contents of that witness statement true to the best of your knowledge and belief?

John Simpkins: They are.

Mr Beer: That witness statement is principally about something known as the Apex Corner incident; is that right?

John Simpkins: That’s correct.

Mr Beer: Something which you say you discovered between making the second witness statement and the third witness statement?

John Simpkins: I was presented with another photocopy of an extract of an ARQ – sorry, of a report, and asked could I explain this.

Mr Beer: Thank you.

Just by way of summary of your background, because it’s over a year since you last gave evidence, it is right that you studied software engineering at the University of Birmingham.

John Simpkins: Correct.

Mr Beer: You’re a member of the British Computer Society, a chartered IT professional and an Incorporated Engineer?

John Simpkins: Correct.

Mr Beer: You joined ICL Pathway in July 1996 as an Application Developer; is that right?

John Simpkins: That is right.

Mr Beer: But shortly after then you moved away from development work into a support role?

John Simpkins: Correct.

Mr Beer: You worked in the predecessor department to the SSC, third line support, during the period of the national rollout of the Horizon system?

John Simpkins: Yes, I did.

Mr Beer: So you were working there for Initial Go Live in 1996 and 1997?

John Simpkins: Correct.

Mr Beer: You remained there for the course of the national rollout?

John Simpkins: I did.

Mr Beer: You told us on the last occasion that your job title then was Project Specialist?

John Simpkins: Yes, I believe it’s Product Specialist, actually.

Mr Beer: Okay, Product Specialist. Thank you.

Did you have a particular role at that time in relation to the EPOSS software within the Horizon system?

John Simpkins: We supported it.

Mr Beer: And what did support of the EPOSS system consist of?

John Simpkins: So if there were any reported incidents, live incidents – mainly we were live third line support – they would be raised on a ticketing system and those tickets would be passed to us to investigate the evidence.

Mr Beer: What would the investigation consist of and what would you do in the course of the investigation?

John Simpkins: Normally, you would investigate the – well, it depends upon the type of call, there may be events raised in the data centre, there may be a – evidence provided by a subpostmaster or a user. There may be a – another feed of evidence from a database or some other source. Then you would investigate the source of that evidence, and you would probably gather evidence from multiple locations, including the counter, some application logs on the counter. You might look at the message store, which was effectively the database on the counter. You might look at the data centre, where you have harvesters and other agents that worked with that data from the counter, and the databases themselves.

Mr Beer: Were you responsible for the development of any fixes?

John Simpkins: There was an idea we could look at doing workarounds, so if a workaround was a – either just telling the subpostmaster how to work around a problem, or potentially is there a workaround we can do, an example might be clearing the print logs and things like that, so we can actually clear a log and allow the subpostmaster to continue working. But no software fixes, we didn’t produce, no.

Mr Beer: Who held the responsibility for software fixes at this time, so in the Initial Go Live and then in rollout?

John Simpkins: That would be the fourth line support team.

Mr Beer: The fourth line support team?

John Simpkins: Correct.

Mr Beer: How would they be passed responsibility for writing fixes?

John Simpkins: So I mentioned a ticketing system, it was PinICL originally, then PEAK. So we would add our evidence to that system and then that ticket would be routed to the appropriate team.

Mr Beer: Looking at that period as a whole, ie Initial Go Live and then national rollout, what would your summary be of the nature and extent of the problems with EPOSS?

John Simpkins: There were problems with EPOSS definitely. It was a new system, then – I don’t recall there being that many, mainly because of the amount of staff the SSC had. During –

Mr Beer: Just to interrupt you there, you mean so that the problems would be spread amongst that number of staff?

John Simpkins: Yes and no. Sorry, what I meant was, initially, there weren’t that many staff in the SSC and we weren’t overrun with defects. Then, you’re correct, as the SSC grew, the defects were spread out but we did have specialists in the team that concentrated on different areas. Again, we weren’t overrun. However, during rollout itself there were a lot more calls than post-rollout.

Mr Beer: Who was, if anyone, the EPOSS specialist?

John Simpkins: I would say Anne –

Mr Beer: So that’s Anne Chambers?

John Simpkins: Anne Chambers, Diane Rowe, Dave Seddon, Lina Kiang.

Mr Beer: Do you recall something called the EPOSS taskforce?

John Simpkins: I don’t, but I have seen documentation in this Inquiry.

Mr Beer: So only recently have you become aware of something called the EPOSS taskforce?

John Simpkins: Correct.

Mr Beer: So, at the time, you didn’t know that there was a part of Fujitsu given over to investigating a high number of problems with EPOSS?

John Simpkins: No, I was not.

Mr Beer: Were you aware at the time of a report that the EPOSS taskforce wrote that recommended a rewrite of the EPOSS software?

John Simpkins: No, I was not.

Mr Beer: Therefore, I think it follows that you weren’t aware of the rejection of that proposal –

John Simpkins: Correct.

Mr Beer: – and instead the adoption of a system of active management, as it was called, of the EPOSS system?

John Simpkins: Yes, that’s correct and fixed forward is – yes.

Mr Beer: You weren’t aware of any of that going on?

John Simpkins: No.

Mr Beer: I think you became a Team Leader in 2010 –

John Simpkins: Correct.

Mr Beer: – reporting to the SSC manager. At that time was that Steve Parker?

John Simpkins: That’s correct.

Mr Beer: You remain employed to date by Fujitsu as a Team Leader in the SSC, the Software Support Centre?

John Simpkins: Correct.

Mr Beer: Now, I want to ask you about the different species of data held as part of the audit trail in Legacy Horizon and Horizon Online.

John Simpkins: Okay.

Mr Beer: So can we start, please, with Legacy Horizon. In fact, can we turn to your third witness statement, please. WITN04110300, and page 3, please.

Although this witness statement is to do with something else, the issue that we mentioned at Apex Corner, in paragraph 9 you set out a description of what you call the life-cycle of a transaction in Legacy Horizon. I just want to go through this because is what you’re doing here essentially setting out, stage by stage, what happens when a transaction is undertaken in Legacy Horizon?

John Simpkins: Yes, I would expand it from that. It’s all messages, not just transactions.

Mr Beer: Thank you. So reading through it, you say:

“… messages (including transaction messages) were written to the Riposte message store on the local counter disk.”

Can you explain for those who may not have listened to all of the Phase 2 and 3 evidence what you mean by that?

John Simpkins: Riposte had distributed databases. Every counter had its own message store, which was a NoSQL database, effectively, and the messages that that counter used to operate, including reference data and the transactions it created at that counter, were all stored on the message store, which is a file, and that file is on the local hard disk of that counter.

Mr Beer: So if there were five counters in a branch, there would be five local counter disks; is that right?

John Simpkins: Correct.

Mr Beer: If there were ten, there would be ten?

John Simpkins: Correct, and if there was one, there was two counter disks because that was a special case and had a swappable disk.

Mr Beer: You continue:

“They were then replicated locally to other counters within the branch or, in single-counter branches ([like] Apex Corner), to internal removable mirror disks.”

Can you explain what you mean by that, please?

John Simpkins: Yes, so when a message is written it is broadcast immediately to all local neighbours. Riposte has an idea of neighbours and when you set up those five counters in your example, you tell counter 1 about its other four local neighbours, and counter 2 also about its four local neighbours.

When you perform a transaction or any other message on, say, counter 1, it will broadcast that to all neighbouring counters, so that they will get a copy of those messages.

In a single-counter branch, there was a single point of failure on that counter, so it had another version of Riposte, effectively, installed on the counter as well.

Mr Beer: That’s the mirror?

John Simpkins: That’s the mirror disk, which is a removable disk, and the messages were again replicated to that. Also, the counter node 1 was also called the Gateway Counter. That had a remote access up to the data centre. So that was the –

Mr Beer: To the correspondence server?

John Simpkins: Correct. So, in the data centre, we had correspondence servers that that counter also replicated the messages to.
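
The replication flow just described can be sketched, purely for illustration. Riposte is proprietary software, so every class, method and name below is invented for explanation rather than taken from its API: a message written at one counter is broadcast to its configured local neighbours, replicated to a mirror disk in a single-counter branch, and replicated from the gateway counter up to the correspondence servers in the branch's cluster.

```python
class Node:
    """A store that keeps an append-only list of messages."""
    def __init__(self, name):
        self.name = name
        self.messages = []

    def receive(self, message):
        self.messages.append(message)


class Counter(Node):
    def __init__(self, name, neighbours=(), mirror=None, cluster=()):
        super().__init__(name)
        self.neighbours = list(neighbours)   # other counters in the branch
        self.mirror = mirror                 # removable disk, single-counter case
        self.cluster = list(cluster)         # correspondence servers (gateway only)

    def write(self, message):
        self.receive(message)                # local message store first
        for peer in self.neighbours:         # broadcast to local neighbours
            peer.receive(message)
        if self.mirror is not None:          # replicate to the mirror disk
            self.mirror.receive(message)
        for server in self.cluster:          # replicate up to the data centre
            server.receive(message)


# Single-counter branch (the Apex Corner shape): one counter with a mirror
# disk, assigned to a cluster of four correspondence servers.
mirror = Node("mirror-disk")
cluster = [Node(f"corr-server-{i}") for i in range(1, 5)]
gateway = Counter("counter-1", mirror=mirror, cluster=cluster)

gateway.write({"type": "transaction", "product": "stamp", "value": 100})
```

On this model, one settled message ends up in five places at once: the counter's own store, the mirror disk, and all four correspondence servers in the cluster, which matches the "four copies in the data centre" the witness describes.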

Mr Beer: You continue:

“Legacy … was primarily an offline system, so the messages would be sent to the Correspondence Servers periodically or immediately depending on the network configuration. Every branch was assigned to one of four ‘Clusters’ …”

That’s clusters in the correspondence servers?

John Simpkins: Correct, so we had 16 correspondence servers, so four made up a cluster. So if that counter 1 replicated its messages up, you had four copies of that message in the data centre.

Mr Beer: You continue:

“… and this controlled which Correspondence Server messages from that branch replicated to. There were 16 Correspondence Servers, and each one only contained messages for a single Cluster. Once in the Correspondence Servers, the Audit Harvester program would copy all messages from the Correspondence Server (ie a single Cluster) to a series of flat text files labelled by Data Centre, Cluster and date.”

Can you explain what the audit harvester program was, essentially?

John Simpkins: So there was an idea of agents which monitor messages coming in to the correspondence servers. They effectively listen to messages as they’re inserted.

The audit harvester had a filter that basically got every message as it came in and its job was to write it to a flat file, so a basic file on the disk on the correspondence server.

When it got to a certain size, it would switch and start writing another file, and it began a new file each day.
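
The rotation behaviour described above can be sketched as follows. This is only an illustration of the idea, not Fujitsu's implementation: file names combining data centre, cluster and date, the size threshold, and every identifier are invented.

```python
class AuditHarvester:
    """Writes incoming messages to flat files, rotating by size and by day."""

    def __init__(self, data_centre, cluster, max_bytes=1000):
        self.data_centre = data_centre
        self.cluster = cluster
        self.max_bytes = max_bytes
        self.files = {}          # filename -> list of message lines
        self.current = None      # filename currently being written
        self.current_day = None
        self.sequence = 0
        self.size = 0

    def _open_new_file(self, day):
        self.sequence += 1
        self.current = f"{self.data_centre}_{self.cluster}_{day}_{self.sequence:03d}.log"
        self.files[self.current] = []
        self.current_day = day
        self.size = 0

    def harvest(self, day, message):
        line = repr(message)
        # begin a new file on a new day, or when the size limit would be passed
        if (self.current is None or day != self.current_day
                or self.size + len(line) > self.max_bytes):
            self._open_new_file(day)
        self.files[self.current].append(line)
        self.size += len(line)


h = AuditHarvester("wigan", "cluster-1", max_bytes=120)
for i in range(5):
    h.harvest("2000-03-01", {"counter": 1, "txn": i})
h.harvest("2000-03-02", {"counter": 1, "txn": 5})
```

Every message lands in exactly one flat file, and a new file opens on the day boundary, mirroring the "labelled by Data Centre, Cluster and date" description in the witness statement.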

Mr Beer: When you were doing your work in the SSC, so when a ticket came in, on PinICL to start with, which data in that sequence of events that you’ve essentially described, in that process that you’ve described, would you seek to access?

John Simpkins: Initially, the correspondence server because it’s the easiest to get to.

Mr Beer: Why was it the easiest to get to?

John Simpkins: So when you were supporting a live data centre, you had –

Mr Beer: Sorry, the witness statement can come down. Thank you.

John Simpkins: – you had a computer dedicated to that network. It had two-factor authentication and when you logged into it, you would, from there, connect to the data centre, which would allow you to connect to the correspondence servers or databases.

If you wanted to go anywhere else then you would do another hop, as it were, from the data centre.

Mr Beer: In your witness statement – we’re going to look at this in a bit more detail in a moment – you say that the data that you found most valuable to access when carrying out your work was in the message store?

John Simpkins: Correct.

Mr Beer: Where are you referring to in the process you’ve just described?

John Simpkins: That is the correspondence server.

Mr Beer: In the correspondence server?

John Simpkins: It should have all the messages from the counters. If it doesn’t match, effectively, with the call or what is being reported, then you may go down and see if there’s a difference between the correspondence servers and the counter.

Mr Beer: Indeed, that might be one of the very issues you’re investigating: a mismatch between what’s held locally and that which has made it to the correspondence server?

John Simpkins: Potentially.

Mr Beer: So can you explain how, when you needed to access data in the message store, you went about it in the SSC?

John Simpkins: We did have some tools that we wrote, internally ourselves, support tools.

Mr Beer: So you mean you wrote some code in order to get into the message store?

John Simpkins: You could do it multiple ways. One way is with the tooling that we wrote ourselves. If you knew what you were going to extract, Riposte allowed you to have a query language, much like a SQL language.

Mr Beer: You’ll have to explain what that means?

John Simpkins: Sorry, structured query language, a database language, where you could say, “I want this field, this field, this field, or attribute, from this counter in this date range”, and then – so that, if you knew what you were aiming for. If you didn’t know what you were aiming for, you would probably extract it all as text. So the whole lot to one large text file and then start filtering that through text editors.

Mr Beer: So Riposte had its own investigation/query system built into it, a tool for extraction built into it?

John Simpkins: It had a tool to extract, correct, yes. So you could extract, actually the tool – you could use part of that structured query language as well. When you extract the messages, normally you would just extract everything.
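
The kind of selection being described – named attributes, one counter, a date range – can be sketched in Python. The actual Riposte query grammar is proprietary and is not reproduced here; the field names and store layout below are invented for illustration.

```python
from datetime import date

def select(messages, fields, counter, start, end):
    """Return the requested attributes from messages matching a counter and date range."""
    rows = []
    for msg in messages:
        if msg["counter"] == counter and start <= msg["date"] <= end:
            rows.append({f: msg.get(f) for f in fields})
    return rows

# A toy message store: three messages across two counters.
store = [
    {"counter": 1, "date": date(2000, 3, 1), "product": "stamp", "value": 27},
    {"counter": 2, "date": date(2000, 3, 1), "product": "giro", "value": 500},
    {"counter": 1, "date": date(2000, 3, 9), "product": "pension", "value": 7250},
]

# "I want this field, this field, from this counter in this date range"
rows = select(store, ["product", "value"], counter=1,
              start=date(2000, 3, 1), end=date(2000, 3, 5))
```

The alternative the witness mentions – extracting everything to one large text file and filtering in a text editor – is the same operation done by hand rather than by a query.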

Mr Beer: Why was it necessary to write separate tools within the SSC?

John Simpkins: When you came across issues, you would learn to focus your investigation and also some people didn’t have the skills to understand where – write the language that I was talking about, the Riposte grammar, so that if you had a tool, everyone got it right all the time.

Mr Beer: Can you summarise, to your knowledge, the process of the saving of, storage of and extraction of audit data?

John Simpkins: So are we talking about audit data from Riposte into the audit system, sorry?

Mr Beer: No, we’re going to go on in a moment to speak about something which has been described in the documents as ARQ data.

John Simpkins: Okay. I have a limited amount of knowledge about ARQ data.

Mr Beer: As you say in your witness statement, it’s not something that was in your day-to-day use?

John Simpkins: Yes, that’s correct. But I can give you my understanding of what have you.

Mr Beer: Yes.

John Simpkins: So from those flat files we were talking about, which are extracted by the audit harvesters, they are passed to the audit system. The audit system then seals them. So the audit system calculates a check digit on them, and it puts that into a database and then that can be reused later to make sure that it hasn’t – that file hasn’t changed whilst in audit.
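
The sealing idea described here – a check digit computed over each file and stored separately, so the file can later be re-checked for any change while in audit – can be sketched as below. The evidence does not state which algorithm Fujitsu actually used; SHA-256 and all the names here are purely illustrative.

```python
import hashlib

def seal(contents: bytes) -> str:
    """Compute a digest ("check digit") over a harvested file's contents."""
    return hashlib.sha256(contents).hexdigest()

def verify(contents: bytes, recorded_seal: str) -> bool:
    """Re-compute the digest and compare it with the one recorded at sealing time."""
    return seal(contents) == recorded_seal

original = b"counter 1: stamp sale, 27p\n"

# The seal is stored in a separate database, keyed by file name.
seal_db = {"wigan_cluster1_2000-03-01.log": seal(original)}

# Later: the unchanged file verifies; any altered copy does not.
intact = verify(original, seal_db["wigan_cluster1_2000-03-01.log"])
tampered = verify(b"counter 1: stamp sale, 72p\n", seal_db["wigan_cluster1_2000-03-01.log"])
```

The point of keeping the digest apart from the file is that a change to the file alone can always be detected by re-running the comparison.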

Mr Beer: Stopping you there, what did you understand the purpose of the retention of audit data to be for?

John Simpkins: That’s a good question. I presume it was such that when messages that were no longer in the message store, or messages that were no longer in the databases, or files that we passed between us and third parties and third parties to us, when they were no longer available on the live system, we could go to audit and request for them.

Mr Beer: Did you understand, as the title of the data might suggest, “audit data”, that it was to be used for the purposes of some sort of audit?

John Simpkins: I didn’t. I used it as an extension to history of the data that’s available to me.

Mr Beer: You mention that one reason that you understood it was retained was that there was a limitation, a time limit, on the retention of data in the message store.

John Simpkins: Correct.

Mr Beer: How long was that limit? I think it changed.

John Simpkins: It did, yes. I think I’ve – I’ve definitely seen message stores where it’s 42 days. I think it also was 35 days, or something, at one stage.

Mr Beer: So did that mean data in the message store was not available to you if you were conducting an enquiry, an investigation, depending on the relevant time we’re looking at, more than 35 or more than 42 days after the data had been created?

John Simpkins: That’s correct. Some messages do expire. There are some messages that are effectively permanent and objects – you had objects and messages. Objects, effectively, the last version of it was permanent and never expired; but messages, other messages, did expire.
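
The retention rule described – ordinary messages expiring once older than the window (35 or 42 days, depending on the period), while the last version of each object is kept permanently – can be sketched as follows. The record shapes and field names are invented for illustration.

```python
from datetime import date, timedelta

def purge(messages, today, retention_days=42):
    """Keep unexpired messages plus the latest version of every object."""
    cutoff = today - timedelta(days=retention_days)

    # Find the most recent version of each object; that version never expires.
    latest = {}
    for m in messages:
        if m["kind"] == "object":
            name = m["name"]
            if name not in latest or m["date"] > latest[name]["date"]:
                latest[name] = m

    return [m for m in messages
            if m["date"] >= cutoff or m is latest.get(m.get("name"))]

store = [
    {"kind": "object",  "name": "stock-unit", "date": date(2000, 1, 1)},
    {"kind": "message", "name": "txn-1",      "date": date(2000, 1, 1)},
    {"kind": "message", "name": "txn-2",      "date": date(2000, 3, 1)},
]

# The old object survives the purge; the equally old ordinary message does not.
kept = purge(store, today=date(2000, 3, 10))
```

This is why an investigator arriving more than 35 or 42 days after the event could no longer rely on the counter's message store and had to turn to the audit copies instead.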

Mr Beer: Did you understand that when audit data, ARQ data, was extracted by Fujitsu and presented to the Post Office, it was presented in a filtered format?

John Simpkins: Yes, I’ve seen some ARQ extracts that look like they are filtered and then put in Excel.

Mr Beer: So the data has been manipulated from its original source into a filtered format?

John Simpkins: Correct.

Mr Beer: Was that something you were aware of at the time?

John Simpkins: Not really, because we – if we requested data from audit, which I believe we did do, we got it back in the basic Riposte –

Mr Beer: Raw format?

John Simpkins: Correct.

Mr Beer: Where did your understanding come from, that, for the purposes of presentation to the Post Office, it had been – I’ve used the word “manipulated”, that might carry unwanted implications. Is there a technical term for it?

John Simpkins: Filtered?

Mr Beer: Filtered.

John Simpkins: Sorry, that’s just an off-the-cuff technical term. I have seen examples of ARQs provided to me.

Mr Beer: Okay. Was software used to conduct that filtering?

John Simpkins: Yes.

Mr Beer: What was that software called?

John Simpkins: Again, I was presented with an ARQ and I think it had a title on the top of the Excel spreadsheet which said RQuery UK, which probably means Riposte Query UK, and on the second tab of that Excel spreadsheet it had the FLWOR language that was used, which is an XQuery language, to say which fields to pull out and how to filter it.

Mr Beer: Do you know who wrote that software, that filtering software?

John Simpkins: No.

Mr Beer: Was that in-house again?

John Simpkins: I expect it was in-house, as it said in-house on “RQuery UK”. I would have to talk to Gerald about that.

Mr Beer: So somebody within Fujitsu?

John Simpkins: I would think so, yes.

Mr Beer: What was the purpose, to your understanding, of changing the presentation of the data in this way or filtering it in this way?

John Simpkins: I do not know. I would expect it was to make it more simple to understand. The original Riposte Attribute Grammar is quite – it’s somewhere between XML and JSON format. It’s very structured in itself but not very easy to read and there’s lots of attributes in there that probably won’t make sense unless you have access to the high-level designs.

Mr Beer: Thank you. Can we undertake a similar exercise to the one that you undertook in paragraph 9 of your third witness statement, “The life-cycle of a Legacy Horizon transaction”, for Horizon Online. You don’t do this, because that didn’t arise in relation to the Apex Corner incident, in your witness statements.

John Simpkins: No, that’s –

Mr Beer: Can you broadly describe the life-cycle of a transaction in Horizon Online?

John Simpkins: Yes, so when the transaction gets settled in Live, in HNG-X, it’s immediately broadcast up to the data centre, to an OSR, which is an online service router, I think. I might have to check that acronym but, if you have 10 OSRs, the messages are broadcast to them, then they will then update the branch database or go via another route such as CDP, which allows you to send messages out to third parties.

So it depends on the type of transaction you were doing but, if you were just doing a basic stamp sale, it would go from the counter to the OSR and be recorded on the branch database and then a response back to the counter to say it was successful.
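
The basic-sale path described here – counter to online service router, router to branch database, success response back to the counter – can be sketched like this. HNG-X internals are not public, so every class and name below is invented for explanation.

```python
class BranchDatabase:
    """The central record of branch transactions in the data centre."""
    def __init__(self):
        self.rows = []

    def record(self, branch, transaction):
        self.rows.append((branch, transaction))
        return "success"


class OnlineServiceRouter:
    """An OSR: receives settled baskets from counters and routes them onward."""
    def __init__(self, branch_db):
        self.branch_db = branch_db

    def route(self, branch, transaction):
        # A basic sale is recorded on the branch database; other transaction
        # types might instead be routed on to third parties (the CDP path).
        return self.branch_db.record(branch, transaction)


class Counter:
    """An HNG-X counter: online, so it settles immediately via the router."""
    def __init__(self, router):
        self.router = router

    def settle(self, branch, transaction):
        return self.router.route(branch, transaction)


db = BranchDatabase()
counter = Counter(OnlineServiceRouter(db))
response = counter.settle("apex-corner", {"product": "stamp", "value": 100})
```

The contrast with Legacy Horizon is the direction of authority: here the central branch database, not the counter's local store, holds the settled record, and the counter only proceeds once the success response comes back.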

Mr Beer: Can I ask the same question, when you needed to access data for the purposes of your work in the SSC, in Horizon Online, which data would you access?

John Simpkins: We would go to a database called the BRSS, the branch support database. So the branch support database is, very similar to the branch database, a live one, but it has some replication software, it’s Oracle, so it uses GoldenGate, which copies the software – sorry, all the transactions that happen to the support database, and the support database also keeps data for much longer than is required in the actual live database. So we can go back a year in the support database.

Mr Beer: Thank you. Can I turn briefly then to the role of the SSC. Is it right that the SSC, the Service Support Centre, worked closely with the fourth line of support, Application Support Service, in the identification and resolution of software incidents requiring bug fixes?

John Simpkins: To an extent, yes. So they have no access to the live information, so all evidence would be provided by us. So, initially, we would do an investigation, gather the evidence and then, if we can’t explain it, then it will probably go to the fourth line support team. If they need any more evidence they would come back to us and then, eventually, hopefully, they would be able to get to the bottom of what the issue is.

Mr Beer: I want to look at a service description document from 2009, to see whether what it describes accords with the position on the ground. Can we start, please, by looking at FUJ00080066. Can you see from this page, page 1, the title of the document is “Third Line Support Service: Service Description”?

John Simpkins: Yes.

Mr Beer: So this is supposed to be a description of the service provided by the SSC, yes?

John Simpkins: Correct.

Mr Beer: We can see from the top right, the date is 4 September 2008 but, if we just go over the page to page 2, we can see the document seems to have been approved only on 27 January 2009; can you see that?

John Simpkins: Correct, yeah.

Mr Beer: If we go back to page 1, please. We can see the originator of the document is Mik Peach. Was he the manager at this time in January ‘09, so Mr Parker’s predecessor?

John Simpkins: Correct, there was a manager in between them, yes.

Mr Beer: We can see over the page at page 2, about the middle of the page, that a reviewer appears to have been Mr Parker himself, yes?

John Simpkins: Yes.

Mr Beer: So this document is, essentially, is this right, a summary description of what you in the SSC were mandated to do?

John Simpkins: Yes.

Mr Beer: Can we go to page 14, please. This is under a big heading on the previous page, we needn’t look at it, “Dependencies and interfaces with other operational services”, and there’s a list of interfaces, so interrelationships. It’s paragraph, so the second paragraph down on that page. If that just can be enlarged, please – thank you:

“The Application Support Service (fourth line) and the Third Line Support Service work closely together in the identification and resolution of Software Incidents requiring bug fixes. If the scope of either the Application Support Service (fourth line) or the Third Line Support Service is changed, the completion of Software Incident bug fixes would be the responsibility of the remaining service.”

What’s that saying?

John Simpkins: The first part is saying that –

Mr Beer: Sorry, my mistake. What’s the second part saying?

John Simpkins: Oh, the second part? I have no idea. I’m presuming it’s either talking about merging third and fourth line, or eliminating one of them, say fourth line would make sense, so third line would also have to do bug fixes.

Mr Beer: I see, so it’s talking about if there was a change to the way that the support service was provided, that either third line was extinguished or fourth line was extinguished or changed, then the responsibility described would vest in the remaining bit?

John Simpkins: That’s my reading, yes.

Mr Beer: Do you agree with the description of the interaction of the third and fourth line support services?

John Simpkins: Err … yes. We also provided other things, other than just software issues. We did lots of reporting. There were facilities we provided other than just this but, yes.

Mr Beer: To what extent was third line support involved in fixing bugs?

John Simpkins: We didn’t actually do the fixes but we would help identify the fixes, so we would provide the – our investigation, we would provide further evidence from Live. I think that’s probably it.

Mr Beer: So the SSC, would this be fair, should have good visibility on the existence of bugs and the steps taken to fix them?

John Simpkins: We would have good visibility of bugs. Once the ticket with all the evidence required is with fourth line, then it may go off our visibility. In theory, we would have probably created a knowledge article for that defect, so that when another person gets a call they can identify that that’s already been identified and the call is already with development.

Mr Beer: A knowledge article, is that different from a KEL?

John Simpkins: Sorry, it’s a KEL.

Mr Beer: Just explain what a KEL is to the uninitiated?

John Simpkins: A KEL was Known Error Log. It’s a repository of knowledge articles that the first, second, third line used. When we were investigating incidents, we could search it with the symptoms that were provided to us and, hopefully, find out that – whether this incident has been seen before, if there is a workaround, what information do we need to gather if it’s an ongoing investigation?
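
The lookup being described – searching the repository by the symptoms reported with an incident – might work along these lines. The real KEL was a web-based repository; the records, field names and references below are invented for illustration (the reference format loosely echoes the author-plus-number style seen in disclosed KELs).

```python
def search_kel(articles, symptoms):
    """Return KEL references whose recorded symptoms overlap those reported."""
    hits = []
    for ref, article in articles.items():
        if set(symptoms) & set(article["symptoms"]):
            hits.append(ref)
    return hits

kel = {
    "JSimpkins0001": {"symptoms": ["receipts and payments mismatch"],
                      "workaround": "none; gather counter event logs"},
    "AChambers0002": {"symptoms": ["printer log full", "counter locked"],
                      "workaround": "clear the print logs"},
}

# An incoming incident reports one symptom; the search finds the matching article.
hits = search_kel(kel, ["counter locked"])
```

A hit tells the support engineer whether the incident has been seen before, whether a workaround exists, and what evidence to gather if the investigation is still open.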

Mr Beer: So, although it wasn’t intended for this purpose, would you agree that, if somebody asked the question from, say, outside the organisation in 2005 “What known bugs are there in the Horizon system and what steps have been taken to correct them”, the Known Error Log would be a good place to start?

John Simpkins: It’s a good place to start, but it depends on the – how well that was house kept. So when the defect was resolved and fixed, that needed to be fed back and –

Mr Beer: Just stopping there, how well was it maintained?

John Simpkins: I would say reasonably. I wouldn’t say it was perfect, I would say reasonably.

Mr Beer: Why wasn’t it maintained more than reasonably well?

John Simpkins: Because when the defect was closed, there was quite often cloning of PEAKs and when a defect was closed it may not be matched up to that KEL when it came back for closure.

Mr Beer: You’ll have to decode that, I’m afraid. I think I understand what you mean but can you explain in simpler language, please?

John Simpkins: When a defect PEAK/PinICL goes to fourth line, they could clone that ticket, especially if there is more than one part that needs fixing, and when they have released the fix, it may come back to us that that PinICL or PEAK would come back to third line team for closure because it was originated there, effectively, on the PEAK/PinICL system, so when you do a final progress it routes it back to the originator.

The pre-scanner in the SSC at that time would either pass it back to the person who originally handled it to make sure that is a reasonable closure or they may close it themselves, and it’s relied upon them to make sure that they were aware of a knowledge article and update it.

Mr Beer: So I think we agreed that if I was asking the question in, say, 2005, of what known errors or bugs there were in Horizon, the Known Error Log would be a very good place to start?

John Simpkins: It’s a good place to start but you would need the PinICLs or PEAKs to go with it.

Mr Beer: On what system was the Known Error Log kept?

John Simpkins: It’s the SSC’s own corporate system, managed by us. There were multiple systems throughout history: we managed it ourselves and then effectively moved on to Fujitsu’s own internally managed services, and then it was just a virtual machine on that.

Mr Beer: Who had access, other than members of the SSC, to the Known Error Log?

John Simpkins: I believe the first line, second line, third line and fourth line all had access to the Known Error Log – all Fujitsu staff, sorry.

Mr Beer: So it was a well known repository of information?

John Simpkins: Correct.

Mr Beer: Indeed, that was its very purpose –

John Simpkins: Correct.

Mr Beer: – that people knew about it and it’s perhaps the first thing one might reach to if a seemingly new problem arose?

John Simpkins: Correct.

Mr Beer: So they had electronic access to it, first, second and fourth line support?

John Simpkins: Yes.

Mr Beer: What about outside the support teams that you’ve just listed; anyone else have access to it?

John Simpkins: I don’t believe so.

Mr Beer: In the period from 2000 to 2010, were you aware of any challenges to the integrity of Horizon data being raised by subpostmasters?

John Simpkins: I was aware of the incidents raised during that time.

Mr Beer: I mean, that was your work, essentially, on a daily basis?

John Simpkins: Exactly.

Mr Beer: Should each of those have resulted in either a decision to create a KEL, a Known Error Log, or to check whether the issue being raised was adequately covered by an existing KEL?

John Simpkins: It should have been.

Mr Beer: Was that always done?

John Simpkins: I believe so. You could also search the PEAK system to see if there’s any similar issues listed in PEAK. You could search the first line Helpdesk system to see if there’s any similar issues there, as well as the KEL system.

Mr Beer: Is it right that the SSC was not generally responsible for reporting issues or the outcome of investigations or the outcome of bug fixes back to the Post Office?

John Simpkins: The ticket itself would be reported back. It had to, I believe first line had to agree closure if the ticket came through first line. But Service Management would do that, while we were Incident Management, not Service Management.

Mr Beer: So there was something called the Service Management Team; is that right?

John Simpkins: Correct.

Mr Beer: Was that also based in Bracknell?

John Simpkins: Yes.

Mr Beer: So they were the point of contact back to the Post Office; is that right?

John Simpkins: Correct.

Mr Beer: Did they have access to KEL?

John Simpkins: I can’t remember. I imagine they did but I can’t remember, exactly, no.

Mr Beer: Did the Post Office have direct access to the Known Error Log?

John Simpkins: No.

Mr Beer: In your dealings with the Post Office, would you understand that they knew of the existence of the Known Error Log?

John Simpkins: I don’t know. I imagine we probably did refer to them quite often. When we talked about an incident, we would refer to a KEL reference, so that –

Mr Beer: Why would you be referring, when you talked to the Post Office, to a KEL reference?

John Simpkins: I can’t remember any instance of talking to the Post Office but –

Mr Beer: No, but, generally, why would you be talking about KELs?

John Simpkins: Because it allows you to describe that there is a known issue, we have referred to it; this is, effectively, a tracker of a type for it. It has been logged.

Mr Beer: You told us on the last occasion that Mr Peach, Steve Parker’s predecessor who left in 2009, introduced something called the Service Management Portal or the SMP, which was a website on to which was placed reports –

John Simpkins: Correct.

Mr Beer: – and that the Post Office had direct access to the SMP?

John Simpkins: Yes.

Mr Beer: With what frequency were the reports written?

John Simpkins: You would probably have to ask Mik, however, I think it was monthly reports but, presumably, he updated them throughout the month and then published them. But I can’t be certain, I’m afraid.

Mr Beer: What were the monthly reports placed on to the Service Management Portal about?

John Simpkins: I believe they were about service impacting issues.

Mr Beer: What do you mean by that?

John Simpkins: So any issues, any notable defects, any work that we had done for the Post Office, any reports we had produced, kind of metrics about what had happened in that month.

Mr Beer: So if there had been a bug, error or defect identified and a fix applied to it, or some new code written to try to correct the error, is that the kind of thing that would be described in the monthly reports?

John Simpkins: I expect so. Again, I would refer to Mr Peach though.

Mr Beer: Outside of that, the monthly reports on the Service Management Portal, was there any formalised mechanism for informing the Post Office about bugs, errors and defects within the Horizon system?

John Simpkins: I would expect that would be through the Service Management Team.

Mr Beer: So that was the tool, was it?

John Simpkins: Sorry, not the Service Management Portal, the Service Management Team.

Mr Beer: Sorry, the Service Management Team.

John Simpkins: Sorry.

Mr Beer: How many people worked in the Service Management Team?

John Simpkins: I think about half a dozen.

Mr Beer: How did they get their information about what to tell the Post Office?

John Simpkins: Probably from the first line, third line. I’m not sure where else.

Mr Beer: How physically would they get that information?

John Simpkins: I know that Mr Peach provided a monthly report to the Service Management Team.

Mr Beer: So the same thing, the thing from the Service Management Portal, or a different species of report?

John Simpkins: I don’t know. I remember he – mentioning he produced a monthly report.

Mr Beer: Will you agree that there was a mechanism by which Fujitsu told the Post Office what issues had arisen with the Horizon system, how they had been detected, how widespread the issue was, whether the issue affected financial data and, in particular, balancing?

John Simpkins: Yes, I believe that was the Service Management – sorry, Service Management Team’s function. We definitely did scoping when an incident happened to try to work out how large an effect it has and who was affected.

Mr Beer: Ie whether it affected more than the one branch that had, for example, reported the issue?

John Simpkins: Correct. Once you know what marker that issue has, you can search for it.

Mr Beer: Can I press you on how the Service Management Team got its information from you in the SSC?

John Simpkins: I would say that would be fed through our manager.

Mr Beer: By?

John Simpkins: Through our manager, Mik Peach.

Mr Beer: How would your manager get the information?

John Simpkins: He would get that from us.

Mr Beer: How would he get it from you?

John Simpkins: Um –

Mr Beer: You’re working away in one corner of the room, administering tickets, Anne Chambers is in another corner of the room administering tickets, there are another up to 25 people in the room administering tickets, looking at your stack of tickets, processing them, getting through all of the work. How was it that all of the information that you were creating, that you were administering, was translated to Mr Peach and then Mr Parker, got over to the SMT and then got over to the Post Office?

John Simpkins: I believe it would have just been talking to him. He sat in the centre of the office and we would tell him what issues we’ve got if there’s anything new.

Mr Beer: To your knowledge, did he, for example, regularly periodically, say monthly, look at all of the PinICLs or PEAKs that had been administered by the team and extract from those the information that he judged it was necessary for the Post Office to know about?

John Simpkins: I don’t know. I would have to ask him. I don’t know how he did his round robin of what the team has done that month.

Mr Beer: You mention him sitting there. Presumably, he wasn’t there 24 hours a day –

John Simpkins: No, he wasn’t.

Mr Beer: – and I think there were shift arrangements; is that right?

John Simpkins: No.

Mr Beer: No.

John Simpkins: So we worked from roughly 8.00 until 6.00. Core hours were 9.00 until 5.30. There was an out-of-hours support rota but that was just a team that worked normal hours, and they would provide out of hours support as well, passing a mobile phone round effectively. There was no rota.

Mr Beer: So you would be sitting in your chair and he would be in the room somewhere and you’d say, “Mik I’ve got a new one here”. What would happen then?

John Simpkins: He would make a note of it, presumably, in his records. I think he had a database form for entering it directly into the SMP. I don’t know if that’s where he kept his records. But he definitely had a database form which he would type it up on.

Mr Beer: He had access to the PEAKs and PinICLs himself?

John Simpkins: Of course.

Mr Beer: So he could go back and check the ticket to see what had been done or not done?

John Simpkins: Correct.

Mr Beer: Did you ever see the reports that were passed to the SMT or put on the Service Management Portal?

John Simpkins: I have, yes.

Mr Beer: You have now. Did you at the time?

John Simpkins: Yes. Quite often, if you wanted to get the details from him, you sat next to him as he typed it up.

Mr Beer: So you could dictate or narrate what the issue was?

John Simpkins: Correct.

Mr Beer: What did you understand the purpose of this communication of information to the Post Office to be for?

John Simpkins: He was talking to Fujitsu at that time. He wasn’t talking to –

Mr Beer: You mentioned that he wrote a monthly report that went to the SMP –

John Simpkins: Oh, yes, correct. I don’t know.

Mr Beer: – which went to the Post Office?

John Simpkins: Yes, for the SMP, I don’t know. We presumably had some agreement that he had to supply something or – it was very much off his own back, the SMP. I think he felt that they needed some information and he went round getting the server put in, and producing the software for it.

Mr Beer: When did you first become aware that data produced by the Horizon system was used for the purposes of criminal investigations and criminal proceedings against subpostmasters?

John Simpkins: Anne Chambers was asked to provide evidence.

Mr Beer: So that would have been about 2006?

John Simpkins: Yes, I think.

Mr Beer: Before then, ie from rollout until 2006, did you not understand that the data was being used to investigate criminally and then bring proceedings against subpostmasters?

John Simpkins: I don’t believe so, no.

Mr Beer: The case in which Anne Chambers was involved was, in fact, a civil case?

John Simpkins: Right.

Mr Beer: Did you know that at the time?

John Simpkins: No.

Mr Beer: Did you just understand it to be some form of legal proceeding?

John Simpkins: Correct.

Mr Beer: Can we go to page 2 of your witness statement, please, your second witness statement, WITN04110200. Page 2, paragraph 5. If we just blow up paragraph 5, please, you say:

“… the SSC does not use and has never generally used ARQ data in the course of its investigations. Instead, for example in the context of Legacy Horizon, the SSC referred to copies of the original Riposte message store for the relevant branch when investigating and diagnosing potential issues with the system. In this regard, the raw message store contained information additional to that in the filtered ARQ spreadsheets, and provided a much more comprehensive account of the data held in the audit archive.”

So the SSC did not generally use ARQ data but used a message store. Was that because there was more data held in the message store beyond that which was produced as a result of a filtered ARQ request?

John Simpkins: Yes.

Mr Beer: What extra information was available in the message store, as opposed to the audit archive?

John Simpkins: I’m differentiating, I think, between ARQ here and I think the raw is held in the audit, but the ARQ is filtered.

Mr Beer: That’s not precisely what you say here, is it? You say:

“… the raw message store contained information additional to that in the filtered ARQ spreadsheets …”

My question is: what additional?

John Simpkins: Sorry, I was trying to – yeah, okay. So there is more data in the raw Riposte message store. However, I do believe the raw message store is available from audit. The ARQs I’ve seen are filtered and only put out certain fields.

Mr Beer: Okay, so – got it. So there’s three things we’re talking about.

John Simpkins: Yeah.

Mr Beer: Message store number 1, filtered ARQ data number 2, and ARQ audit archive, number 3?

John Simpkins: No.

Mr Beer: Okay.

John Simpkins: I think I’m just talking about two things: filtered ARQ and raw message store. So the reason you would go to audit is if it’s been archived off and you can get the raw message store. The ARQs I’ve seen – because they are filtered – are missing a lot of relevant messages we would be looking at.

Mr Beer: My question is, what a lot of relevant messages are they missing that you would be looking for?

John Simpkins: Okay, such as reference data. So reference data controls how the counter operates.

Mr Beer: So just explain to us – many of us know but for those that don’t – what reference data is, please?

John Simpkins: So reference data is configuration information for how the counter operates, what it can sell, how much it will sell it at, what buttons and configuration is available to it. When you do some transactions, what steps do those transactions take?

Mr Beer: Thank you. So it would be missing reference data?

John Simpkins: Those ARQs just seem to be events and transactions that I’ve seen so far.

Mr Beer: Okay, you were in the middle, I think, of providing us a list of things that were missing.

John Simpkins: Yes, there would be additional attributes that aren’t in those ARQs I’ve seen.

Mr Beer: Such as?

John Simpkins: Such as the NUM. So each message is written with the group ID, which is the branch, node ID, which is the counter position, and NUM, which is a unique incrementing counter. That allows you to see exactly what messages have been produced and you won’t miss any, and gives you the order that they were committed to the message store.
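The sequencing property Mr Simpkins describes – every message carrying a group ID (branch), node ID (counter position) and a unique incrementing NUM – can be sketched as a gap check. This is an illustrative sketch only; the field names `group`, `node` and `num` are assumptions, not the actual Riposte attribute names.

```python
from collections import defaultdict

def find_missing_nums(messages):
    """Group messages by (group, node) and report gaps in the NUM sequence.

    Each message is a dict with 'group' (branch), 'node' (counter position)
    and 'num' (unique incrementing counter). Because NUM increments by one
    per message committed on a counter, any gap shows a missing message.
    Returns {(group, node): [missing nums]}.
    """
    by_counter = defaultdict(list)
    for msg in messages:
        by_counter[(msg["group"], msg["node"])].append(msg["num"])

    gaps = {}
    for key, nums in by_counter.items():
        nums.sort()  # sorted NUMs give the order messages were committed
        expected = set(range(nums[0], nums[-1] + 1))
        missing = sorted(expected - set(nums))
        if missing:
            gaps[key] = missing
    return gaps
```

Sorting on NUM recovers the commit order, and an empty result means no messages were missed for that counter.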

There will be other attributes such as if you were doing a banking transaction, you have a request, authorisation, confirmation, handshake between the data centre –

Mr Beer: Just explain what a handshake is?

John Simpkins: So when you start doing a banking transaction, you would write a request message in at the message store. That gets transmitted to the data centre, picked up by an agent. The agent goes to the banking engine, sends it on to the financial institute. Get it back with an authorisation, which goes back down through the agents, back down to the counter, the counter says, “Okay, that’s been authorised”, and then you confirm it at the counter.

That gets harvested back up to the data centre and then we would reconcile that. So the handshake is the passing of the messages backwards and forwards.
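The request/authorisation/confirmation handshake described above can be sketched as a minimal sequence of messages written to the counter's message store. This is a hedged illustration, not the actual Horizon protocol: the message types and the `authorise` callback (standing in for the data centre, agents and banking engine round trip) are assumptions.

```python
def banking_handshake(message_store, authorise):
    """Sketch of a counter-side banking handshake.

    message_store: a list standing in for the counter's Riposte message
    store; authorise: a callable standing in for the round trip through
    the data centre agents to the financial institution, returning True
    if the transaction is authorised.
    """
    # 1. The counter writes a request message into the message store.
    message_store.append({"type": "request"})

    # 2. The request travels to the banking engine; an authorisation
    #    (or decline) comes back down to the counter.
    if not authorise():
        message_store.append({"type": "declined"})
        return False
    message_store.append({"type": "authorisation"})

    # 3. The counter confirms the authorised transaction.
    message_store.append({"type": "confirmation"})
    return True
```

A raw message store would show all three legs of the handshake, which is part of what a filtered transaction-only extract can omit.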

Mr Beer: Of course you’ve listed two species of data that are missing from filtered ARQ that you could see in the message store. Is there a third?

John Simpkins: I think there was many. I’m struggling to recall different types but almost anything that it – AP transaction –

Mr Beer: Explain what AP transactions are?

John Simpkins: Automated payments. Automated payments are like your bill payments, BT payments, things like that. Again, the system would write recovery data when you’re halfway through, until you’ve completed, so that, if it failed, it would take that recovery data and ask you about that transaction that was partially completed.

Mr Beer: Does all of this explain why you would go to the message store and not to filtered ARQ?

John Simpkins: Yes, because you see the whole picture.

Mr Beer: Would you agree that it’s unwise to seek to base conclusions on the basis of the filtered ARQ data, in particular as to the health and integrity of the data that Horizon has produced?

John Simpkins: The health you could not decide from those ARQs. The integrity of the transactions you may be able to, if you’ve got the physical paper copies as well in the branch: you could do a comparison between what the system has and the branch has.

Mr Beer: If we go forward to paragraph 12 of your witness statement, please, that’s on page 4. You’re here referring to the ARQ spreadsheet, that’s a spreadsheet you were asked to analyse – I’m not going to ask you any questions about it – in relation to Mr Lee Castleton and some days of ARQ data at the Marine Drive branch?

John Simpkins: Correct.

Mr Beer: You say:

“… if I refer to that ARQ spreadsheet by way of an example, my view is that the data provided in the ARQ spreadsheet does not contain sufficient information for a postmaster to assess the health of the Horizon system at their branch. The ARQ spreadsheet shows only those transactions recorded by the system. It shows there were no receipts and payments mismatch within those transactions and that there were no system [faults] that required recovery. However, it does not demonstrate the health of the system beyond those parameters.”

You say in that paragraph “The ARQ spreadsheet shows only those transactions recorded by the system”; can you see that, the fourth line, second sentence, yeah?

John Simpkins: Yes.

Mr Beer: What did you mean by that, “The ARQ spreadsheet shows only those transactions recorded by the system”?

John Simpkins: The ARQ that was presented was a filtered subset of just the transactions.

Mr Beer: Are you, by that sentence, also stating that the additional message store data that you have referred us to today may assist in showing the existence or the conduct of transactions as between the local counter and the centre, that are missing from the ARQ spreadsheet?

John Simpkins: If you were to have failed banking transactions, for example, or an AP transaction that’s still yet to be recovered, then I would agree.

Mr Beer: They wouldn’t show up?

John Simpkins: I’m just trying to think. The banking one, whether that would show up as a zero value failed transaction or not. It may still show up that there was a nil banking transaction, but if the AP one was not completed then I don’t believe that would show up.

Mr Beer: To take an example, if we go to your third witness statement, please – sorry, your second witness statement – at page 17, paragraph 34, if that can be blown up, please. You’re here addressing an issue that the Inquiry asked you about, which is concurrent or simultaneous logins, yes?

John Simpkins: Mm-hm.

Mr Beer: You say:

“Although there have been issues with concurrent logins … an initial observation is that the ARQ spreadsheet [that’s the same one we’re talking about] for this instance does not appear to contain evidence that a user was logged on to two counters simultaneously.”

I’ll miss the next bit out. Then you say:

“In order to determine more conclusively what happened at the branch, access to the raw message store would be required.”

Does that paragraph there and what you tell us in it, reflect the fact that you as an expert in the operation of the system or a person with expertise in the operation of the system, would not be prepared to draw a conclusion on the ARQ data alone.

John Simpkins: Yes, it does. Because I am talking about a session transfer, and a session transfer writes multiple messages as it takes the transactions from one counter, puts them in a blob attached to a message and then transfers it to the other counter, and you can clearly see that in the message store.

Mr Beer: So you wouldn’t be prepared to draw conclusions without access to the raw message store and would you say that it would be wrong to ask other people to draw conclusions on the basis of just the data that appears on the filtered ARQ spreadsheets?

John Simpkins: That point in 34 that I’m talking about, I would be about 99 per cent sure that is what’s happened from the evidence between the events and the transactions. Your question is very much wider but I would say, yes.

Mr Beer: Thank you. That can come down.

Was the limitation in ARQ data widely known or recognised within the SSC, ie the limitations of the ARQ data that you have mentioned to us today – was that known within the SSC widely?

John Simpkins: Not really. If we requested the information from the audit, we would have got it in the raw format. We wouldn’t have had it in those Excel spreadsheet formats.

Mr Beer: Did you know at the time that what was being presented to the Post Office and then used in court was the type of filtered ARQ data that you have now seen in the case of Mr Castleton?

John Simpkins: I don’t believe I saw that, no.

Mr Beer: Forget his case.

John Simpkins: Yeah, sorry.

Mr Beer: Individually, I’m using that as an example.

John Simpkins: I don’t believe so. We did use to get PEAKs passed to us with events – counter and data centre events – to filter and say do any of these events have any impact upon a branch? They were in Excel spreadsheets but, again, they looked like a complete extract from the Tivoli database for the events but, no, I don’t recall any ARQs in that format.

Mr Beer: For example, in your experience, would Ms Chambers, Anne Chambers, have been aware that there was substantially more data available in the message store than was provided in a standard ARQ package?

John Simpkins: I’m sure Ms Chambers would have gone to the raw data, as well, to do any analysis, yes.

Mr Beer: That is a different question, where she would have gone. She would have done, I think, the same as you and gone to the raw data. I’m asking whether you think others, including her, were aware that the data being presented to the Post Office in the filtered ARQ format contained substantially less data than was available?

John Simpkins: I’m sure if she saw that ARQ spreadsheet, she would have known and, if any of the SSC saw that, they would have known, but I wasn’t aware of what the ARQs looked like.

Mr Beer: Were you or, to your knowledge, any of your colleagues in the SSC ever asked to provide Fujitsu advice on the range of data that was available and which, therefore, ought to be presented for the purposes of civil or criminal investigations?

John Simpkins: No.

Mr Beer: Was that ever a matter of discussion, so far as you were aware?

John Simpkins: No.

Mr Beer: Were you aware of a branch within Fujitsu called Litigation Support?

John Simpkins: From yesterday, yes.

Mr Beer: You only learnt that yesterday?

John Simpkins: Correct.

Mr Beer: Does it follow that Litigation Support, the people that were providing the ARQ data to the Post Office, never spoke to you or, to your knowledge, anyone within the SSC about the range of data that was available, additional to that which they were sending over in the ARQs?

John Simpkins: I would have expected, if they were concerned – well, I would have expected that that – these ARQs had been designed by someone, they would probably have been architects and they – I am presuming that they have been agreed with the Post Office. That should have been an architect-level discussion about what is available and what should be provided. I don’t know if that’s a Litigation Team level. I would have thought they would just provide what has been designed in the system for them.

Mr Beer: Going back to paragraph 12 of your witness statement, please, that’s on page 4 – that can be blown up, thank you – you say, second line:

“… the ARQ spreadsheet does not contain sufficient information for a party to assess the health of the Horizon system at their branch.”

Then the last line:

“… it does not demonstrate the health of the system beyond those parameters.”

What do you mean by the “health of the system”?

John Simpkins: I would expect events, so Windows events of the counter itself. I would expect events from the data centre, mainly the harvesters, to say if there was any issue harvesting the data written by the branch. I would have thought about the logs that were written at the data centre – sorry, at the counter, audit PS standard logs. There are reports generated at the data centre where it’s checking the transactions as they’re entered into the databases, for receipts and payments, and they regenerate cash accounts, those reports. I would have thought about the tickets raised, if there were any, PEAK, as well as TfS, so if our own internal systems picked up, for example, any issues, they may be raised as a PEAK/PinICL, as well as the TfS raised ones.

Then, going back to the Riposte, I think I detail in here I would have taken from the balance messages written in the Riposte system, so, when you’re calculating the current balance of, for example, cash, you would take what was the opening position for your current cash account, you would add up all the transactions for your current cash account and then you compare that to the declaration that the subpostmaster enters and then you will see if there’s any discrepancy.

As the subpostmaster is doing the overnight cash holding every night, you should be able to quickly see if there is a divergence between the system-generated figure and the subpostmaster’s entered figure and so that would be the point when you start investigating.
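The reconciliation Mr Simpkins sets out – opening position plus transactions, compared against the subpostmaster's declaration, checked day by day until the figures diverge – can be sketched as follows. This is an illustrative sketch of the arithmetic only; the function names and data layout are assumptions, not Horizon's.

```python
def cash_discrepancy(opening, transactions, declared):
    """Compare the system-generated cash figure with the declared figure.

    opening: cash position at the last rollover; transactions: signed cash
    movements since then; declared: the subpostmaster's cash declaration.
    Returns declared minus the system figure (0 means the two agree).
    """
    system_figure = opening + sum(transactions)
    return declared - system_figure

def first_divergence(daily):
    """Given a list of (opening, transactions, declared) tuples, one per
    overnight cash holding, return the index of the first day the figures
    diverge, or None if they always agree."""
    for day, (opening, txns, declared) in enumerate(daily):
        if cash_discrepancy(opening, txns, declared) != 0:
            return day
    return None
```

Because the declaration is made every night, the first non-zero discrepancy pins down the day at which to start investigating what happened in the branch.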

Mr Beer: Was the first time that you saw a filtered ARQ spreadsheet provided to a subpostmaster when we, the Inquiry, showed it to you for the purposes of this Inquiry?

John Simpkins: I believe so, because that’s the first time I noticed the RQuery UK and the Flower language because, in my second witness statement, I think I had some trouble working out whether a time was the start time of a transaction or the time it was committed, and I worked out it has to be the start time. But, by seeing the spreadsheet presented to me in the third witness statement, it actually has the filter there, you can see it’s the start time. So that would have helped me with my second witness statement.

Mr Beer: So it was only in 2023 that you saw the type of data and the extent of the data and how it was being presented that was being transmitted from Fujitsu over to the Post Office for the purposes of criminal proceedings?

John Simpkins: I believe so, yes.

Mr Beer: To your knowledge, did anyone in Fujitsu ever explain the limitations of the data that was being provided to the Post Office?

John Simpkins: No.

Mr Beer: Sir, that would be an appropriate moment to take a break in the topics that we’re addressing.

Sir Wyn Williams: Before we do, can I just ask Mr Simpkins – this is just to check that I haven’t misunderstood earlier evidence by other witnesses, Mr Simpkins, so if you can’t answer, that’s not a problem – but when we heard extensive evidence from Mrs Chambers, she told us two things: essentially, she was unhappy with her experience in giving evidence in the Lee Castleton case; and, secondly, that she’d written quite a detailed memo about her experiences and what she thought ought to happen as a result of it.

My recollection is that, after that, nobody in third line support actually did give evidence in either civil or criminal proceedings. Have I got that right, as far as you’re concerned?

John Simpkins: Yes.

Sir Wyn Williams: Fine. So it follows that, to this day, and you’re still there, as you’ve told us, no one from third line support has given evidence in a criminal or civil trial and, as far as you’re aware, no one in third line support has made a witness statement; is that correct?

John Simpkins: Only to the Inquiry, yes.

Sir Wyn Williams: Yes, sure. I meant a witness statement in civil or criminal proceedings.

John Simpkins: Correct.

Sir Wyn Williams: Fine. Then this is a long shot: when Mr Beer was asking you questions about the SMT disseminating material to the Post Office, he used the expression “the Post Office”. Do you happen to know to which department of the Post Office that sort of information might have been disseminated?

John Simpkins: I don’t, I’m afraid. I even added users into that system. I remember doing that, adding their logins, but I have no recollection of who it was or what parts of –

Sir Wyn Williams: All right. Thank you very much.

What time shall we start again, Mr Beer?

Mr Beer: 11.35, please, sir.

Sir Wyn Williams: Fine.

Mr Beer: Thank you.

(11.20 am)

(A short break)

(11.35 am)

Mr Beer: Good morning, sir, can you continue to see and hear us?

Sir Wyn Williams: Yes, I can, thank you.

Mr Beer: Can we turn up page 4 of your witness statement, please, your second witness statement, and look at paragraph 14 at the bottom. You say, if that can be expanded, thank you:

“Beyond the data described above, it would also have been useful for the postmaster to have visibility of (i) the opening figures from the last rollover, (ii) a running total of the sales, and (iii) the daily cash and stamp declarations made by the postmaster. Access to these records would have allowed the postmaster to compare the Horizon generated figures against the declarations made by the postmaster from the point of the last rollover. A comparison of these figures would show the point at which the two figures diverged, allowing the postmaster then to check what was happening at the branch at that point in time.”

Is it right that the three species of data that you mention there are not shown on the filtered ARQ data?

John Simpkins: So the opening figures are not, the declarations are not, however, the transactions are.

Mr Beer: So (i) no; (ii) no; but (iii) yes?

John Simpkins: No, sorry: (i), no; (ii) you have the sales in those ARQs – is yes; (iii), no.

Mr Beer: As far as you were aware, was there any facility for a subpostmaster in branch to either run reports on Horizon which would generate that information, in categories (i) and (iii), or otherwise to keep track of that information by some other means?

John Simpkins: Yes, the – one would be from the stock unit rollover, would detail the opening figures.

Mr Beer: How would the subpostmaster obtain that information in branch?

John Simpkins: When they do the stock unit rollover, the printout on that will display the opening figures.

Mr Beer: So they could access their print –

John Simpkins: Correct.

Mr Beer: – from the previous rollover?

John Simpkins: Yes. The – there was sales transaction reports available in branch. You would enter in a list of parameters to the query report to say which stock unit, which start date, which end date, things like that – I think you could input what product, I can’t remember exactly – and the events. So the declarations would be shown as events and you did have event reports as well.

Mr Beer: Do you know whether the subpostmasters were trained to use a reporting facility within Horizon to generate material of that kind?

John Simpkins: I don’t. I have seen the – there was a pack of training material that detailed usage of some of these reports.

Mr Beer: Were the three species of information that you set out there information that you or your colleagues at Fujitsu could generate with relative ease?

John Simpkins: Yes, they would be in the message store.

Mr Beer: Were you sometimes asked to provide that information?

John Simpkins: We probably provided it in incidents where we were investigating. I can’t give you any examples but I’m sure we would have pulled that information out.

Mr Beer: Can we turn, still in connection with the species of data available to Fujitsu and that which was passed to the Post Office, to some emails concerning ARQ filtering. Can we start, please, by looking at FUJ00230912.

This is a series of emails between you, Steve Parker and Anne Chambers on 14 May 2010, which seems to reference how filters are applied on ARQ requests concerning events?

John Simpkins: Yes.

Mr Beer: Can we start on page 3, please. At the foot of the page the originating email from Mr Parker to you and Anne Chambers with the heading “ARQ and event filtering”. He says:

“The event lists we are being asked to check on [that’s Horizon Online] ARQ requests are just unmanageable (7-10,000 rows in the SYSMAN3 details).”

Can you explain what SYSMAN3 was, please?

John Simpkins: It was the version of the Tivoli system which was the one that harvested the events from counters and the data centre.

Mr Beer: So what’s the issue that Mr Parker is raising there?

John Simpkins: We used to get Excel spreadsheets passed to the SSC with events that had been harvested in a date range and asked would these events be of any – have any impact upon a counter? And, because it was from the data centre as well as the counter, it was a lot of events could have happened during that period.

Mr Beer: Why were you being asked to check for events?

John Simpkins: I’m not totally sure but they were using the SSC as people who may be able to say whether an event may have been an important one impacting a counter. That was my understanding, and –

Mr Beer: The “they” in that sentence, who was doing the asking?

John Simpkins: The – it was the Security Team, the people who would handle the ARQs.

Mr Beer: Security Team within Fujitsu?

John Simpkins: Yes.

Mr Beer: So they were asking you to look at a lot of data –

John Simpkins: Yes.

Mr Beer: – and see whether there was anything in the data which might contain a relevant event, an occurrence, that impacted on the, what, integrity or reliability of the data?

John Simpkins: The operation, I would say, yes. So we would get a large Excel spreadsheet here, with, say, 7,000 to 10,000 events on it, and be asked to filter those to see if any could have an impact on the counter’s operation. It was a lot of data, it took a lot of time. We generally used the KEL system to say, “Go to event 1, is that in the KEL?” If not, that takes out, say – you would order them – 1,000, and then “Go to another event, is that going to be problematic?” No. That might take out 500.

Then you’d keep going until you’ve got, say, a page of events and then try to work out if those may have had any impact on the counter.
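The winnowing process described above – repeatedly striking out every occurrence of an event type that the KEL records as benign, until only a short residue remains for manual review – can be sketched like this. The event IDs and the shape of the `known_benign` set are illustrative assumptions, not real KEL references.

```python
def filter_events(events, known_benign):
    """Remove events whose event ID the KEL records as having no impact.

    events: list of dicts, each with an 'event_id' key (one row of the
    Excel spreadsheet); known_benign: a set of event IDs judged harmless
    against the KEL. Returns the residue needing manual assessment.
    """
    return [e for e in events if e["event_id"] not in known_benign]
```

Each pass against the KEL can remove hundreds or thousands of rows at once, because one event ID typically accounts for many rows in the extract.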

Mr Beer: So you were being asked to, essentially, vouchsafe the data that was going to ultimately be provided to the Post Office to see whether it included any events that would affect the reliability of the data?

John Simpkins: I believe so.

Mr Beer: Did you know what the data was being used for, the end use of it, ie in investigations and prosecutions?

John Simpkins: I didn’t know about prosecutions but we did know that this was going back to Post Office.

Mr Beer: What did you think it was going back to Post Office for?

John Simpkins: For when someone has requested was this counter working correctly?

Mr Beer: Why would they want to know whether a counter was working correctly?

John Simpkins: That’s my day job.

Mr Beer: Sorry?

John Simpkins: It’s part of my day job.

Mr Beer: Yes, but why did the Post Office want to know whether a counter was working correctly?

John Simpkins: I don’t know. You’d have to ask them.

Mr Beer: Why did you think they wanted to know whether a counter was working correctly?

John Simpkins: I imagine that they had a query, saying was this counter working correctly at this time and, therefore, they have got a specific request about a counter not functioning correctly at that time.

Mr Beer: The purpose of asking, Mr Simpkins, is whether you knew that the exercise you were engaged in may result in an answer or an assertion that was being fed to the Post Office and they would use the product of the work that Fujitsu had done, including your work, to base a criminal investigation or criminal prosecution. Did you know that –

John Simpkins: I –

Mr Beer: – by this time, May 2010?

John Simpkins: I’m not sure if I knew it would go back for a criminal or civil investigation but I knew that it was going back to the Post Office.

Mr Beer: The email continues:

“We are allowed to filter out where the event is known to have no financial impact on the counter.”

What does that mean? Who was doing the allowing there? Who said it was okay to filter out events that were said to have no financial impact on the counter?

John Simpkins: I believe that we were being asked by the Security Team to do this filtering.

Mr Beer: Mr Parker there says “We are allowed to filter out things that are known to have no financial impact”.

John Simpkins: Yes.

Mr Beer: Do you know who granted that permission?

John Simpkins: No. I know where the request was coming – from the Security Team on the PinICL.

Mr Beer: It says that permission has been granted where the event is known to have no financial impact on the counter. Do you know why you were allowed to filter out such known events?

John Simpkins: I think we were trying to help by reducing the quantity of events that will be sent back to the Post Office. So there’s 10,000 events here. If we can help say “These ones are known to be benign from our systems”, then only, say, 500 or something events might go back.

Mr Beer: Is it right that you don’t understand how that agreement or position had been reached, ie this level of filtering out was permissible?

John Simpkins: No, I don’t know how that got reached.

Mr Beer: You know that the practical effect was to reduce a big number of events down to a smaller number of events?

John Simpkins: Correct.

Mr Beer: The email continues:

“We need to get the ARQ filters up to date for [Horizon Online] quickly to make the situation manageable.”

What does that mean?

John Simpkins: So we could feed back to Gerald and his team the events that we believe are benign and they would hard code a change in their filters to take those events out.

Mr Beer: Is it right that that implies there was a lag in recognising, for the purposes of Horizon Online, event filters?

John Simpkins: Yes. So I believe that, because it was a totally new, from-the-ground-up system, there was suddenly a lot of events written by a lot of new data centre servers, and no thought had been done to which of these events could be filtered.

Mr Beer: Was a record kept by you of the steps taken in this filtering process?

John Simpkins: There probably was a work instruction or a how to help.

Mr Beer: That’s a slightly different issue. That’s “was there an instruction on how to do it”.

John Simpkins: Yes.

Mr Beer: I’m asking, in each individual case, did you retain, did you keep, a record of “This is the data that I started with, these were the filters I applied, these are the products that I ended up with that will get passed to the Post Office”?

John Simpkins: I would have to look at the PEAKs to see what we did but I believe that we did feed back to Gerald “These are events that we believe are benign to add to the filter”, but I don’t know what we would have recorded on the PEAK as to which events we have selected out of those 10,000 to filter.

Mr Beer: The email continues:

“According to Gerald Barnes, the way to get the filters changed is:

“‘The events need all to be checked by someone who understands them. Whilst doing this they may well identify certain patterns which they know to be benign. They should then raise a PEAK stating which patterns they consider benign and assign it to the Audit Team. I will then alter our filters to ensure that these events are always filtered out. This seems a little tedious but it has the advantage that we have an audit trail for the reason behind filtering out particular events’.

“Can you cooperate on looking at these event lists and getting the PEAKs raised into audit. Suggest John …”

I think that’s you referred to.

John Simpkins: Mm.

Mr Beer: “… runs the list and Anne add viruses on counter events. If you supply me with the PEAK numbers I’ll get them put through. This is likely to be an iterative process until we can get the events driven down.

“[Sample ARQs attached]. There are some obvious ones on the list that can be knocked off quickly.”

Then if we scroll up the page to page 2, please. I think we see your reply. Your comments:

“The full event text was not included in the sample, most events are probably not [‘worth’, I think that’s meant to say] keeping unless they specify a specific transaction/journal number such that it can be tied back to a financial issue.

“I suggest removing the following events …”

Then there’s a big long list.

You’re saying apply these filters, essentially, to remove events from the ARQ data?

John Simpkins: Correct.

Mr Beer: Top of the page. Mr Parker replies:

“If you agree, let’s get the necessary PEAKs raised …

“I’m concerned that some of the events are not complete (full event text) so unable to classify.”

What does that mean?

John Simpkins: I think I mentioned in the part below that the full event text was not supplied to us. So the event text was truncated in some way when the – it was extracted to us.

Mr Beer: Up the page, please. We can see Ms Chambers’s reply:

“Counter events – I think we should apply the same filters to SYSMAN3 as have already been applied to SYSMAN2 … However I don’t have a list of these. I’m reluctant to put much effort into justifying in this area.”

What was she meaning by that, please?

John Simpkins: SYSMAN2 was the previous version so I’m presuming that she’s saying that there was already a filter set up for the events from the SYSMAN2 product and can that just be brought forward to the SYSMAN3 product.

Mr Beer: So, overall, this is a discussion within the SSC as to the filters that are going to be applied to ARQ data –

John Simpkins: Events, yes.

Mr Beer: – yes – to events within ARQ data to reduce the volume?

John Simpkins: Correct.

Mr Beer: How was it established if an event had no known financial impact?

John Simpkins: I think, when I did it, I normally started with the KELs and searched through the events that were in the Excel spreadsheet against the KEL database.

Mr Beer: Isn’t that a bit of a shaky way of doing things? Doesn’t it rely on the accuracy and completeness of the KELs?

John Simpkins: It did give you a good starting point. You could search the PEAKs. There should, in theory, almost have been a KEL for every single event raised. So the KELs, as I said, is a misnomer; it’s a knowledge article base, and the SMC, who were the second line support team, whenever they met an event that was not already KEL’d for and not already filtered, they would raise a call for it, so there were a lot of knowledge articles all about the events in the data centre.

Mr Beer: That can come down. Thank you.

So, essentially, technical decisions were being taken on what could or could not evidence a problem with financial information. Was input provided by the Post Office on this, to your knowledge?

John Simpkins: Not to my knowledge, no.

Mr Beer: Was this an exercise conducted, therefore, only internally by Fujitsu?

John Simpkins: Yes, I believe so.

Mr Beer: To your knowledge, was the Post Office told the outcome of the exercise, ie what filters had been applied to filter out material that wouldn’t be checked?

John Simpkins: I don’t know.

Mr Beer: To your knowledge, was there such a discussion?

John Simpkins: I don’t know.

Mr Beer: The decision was ultimately taken to use the previous iteration of SYSMAN: SYSMAN2?

John Simpkins: It was mentioned in the email. Whether it was carried forward, I can’t tell you.

Mr Beer: Well, do you know –

John Simpkins: I don’t know.

Mr Beer: The email tended to suggest that we should just use SYSMAN2 and that Anne Chambers was reluctant to put much effort into justifying each additional exclusion.

John Simpkins: That’s what the email said. I would probably ask Gerald. He would be able to tell you what filters were applied.

Mr Beer: Mr Parker had suggested that conducting some new checks would be helpful, hadn’t he?

John Simpkins: Yes.

Mr Beer: To your knowledge was that done?

John Simpkins: We did add new checks – sorry, we added – I mean we added information back to Gerald about the new events, yes.

Mr Beer: Can we turn, please, to FUJ00228917. This is an email exchange of a year later. You’ll see that it again involves you. It’s a one-page email exchange. It’s quite difficult to ascertain what’s happening, certainly to an outsider, but, if we look at the bottom of the page, please, an email from John Rogers, who is described as the test lead for Horizon Online. What’s “LST”?

John Simpkins: Live service test or system test.

Mr Beer: The subject line is “ARQ retrieval format inadequate for support use”:

“This new functionality is under test …

“Have you seen the new spreadsheet that is produced?

“… are you happy with the format?

“If not would you like to see an example?”

Up the page, please:

“Have you got an example please …”

He copies you in:

“[he has] not seen it at all!”

Then at the top of the page, please:

“Attached is a copy of the output events file for two ARQs.

“[One] contains SYSMAN2 data …

“[The other] contains SYSMAN3 data.”

Is this email chain – and this is all we’ve got – evidence of some exploration of how ARQ retrieval could be used by the SSC?

John Simpkins: I would say this is them making a change to the live system and it’s currently in live – live support test – live service test, system test – before it goes live, and they are checking that we are happy with the output of that change to the event spreadsheet.

Mr Beer: You told us earlier this morning that you in the SSC did not use ARQ data for the purposes of your work?

John Simpkins: This is not for our work.

Mr Beer: This is not for your work –

John Simpkins: No.

Mr Beer: – but the area of your work that we’re talking about now is SSC’s involvement in the filtering of ARQ data?

John Simpkins: Yes.

Mr Beer: This is talking about, is this right, under Horizon Online, the filtering of output events –

John Simpkins: Yes.

Mr Beer: – and the SSC being given an opportunity to inform or configure the format of the retrieval to increase its usefulness?

John Simpkins: I presume so, yes.

Mr Beer: Would that be usefulness not to the SSC but usefulness to the end user, the Post Office?

John Simpkins: I imagine it could be the SSC, to help us do the filtering that we are previously doing. I don’t know what the instigation of the change was.

Mr Beer: That was my next question: what was the outcome of this?

John Simpkins: I don’t know.

Mr Beer: Are you aware whether this discussion was ever communicated back to the Post Office, ie “Under Horizon Online there’s an opportunity to change the filtering process of the ARQ data, we’re going to now apply these filters going forwards from January 2011”?

John Simpkins: I don’t know of that. That may be a good one to ask Gerald. Have we looked at the PEAK? It’s got a PEAK reference related to this, 206531.

Mr Beer: If I have it’s presently lost in my memory somewhere but we can do that and maybe do that with Mr Barnes?

John Simpkins: Yeah, it may specify why this change is coming about and what the outcome was.

Mr Beer: Again, at this stage, did you know what the Post Office was using the ARQ data for?

John Simpkins: I don’t believe so. I’m not sure when we would have been aware of – there were prosecutions going on. We did – as I was saying, we did stop doing this and that must have been when we were aware. So the SSC decided we’re not happy doing this filtration if it’s going to be used in court cases, and we stopped.

Mr Beer: Why weren’t you happy?

John Simpkins: Because it – again, leading on from Anne having to give evidence, we thought that if it – we were making the filtered choices, they may want someone to come up and explain exactly why in a court case.

Mr Beer: Why would you be unhappy about doing that?

John Simpkins: I think it was just we did not wish to do that.

Mr Beer: But why?

John Simpkins: Because it’s – I would say that it would be difficult to explain technically every single decision you’ve made out of 10,000 events, why you decided to filter that.

Mr Beer: That document can come down. Thank you.

Why would it be difficult to explain to a court in a statement or in oral evidence why you had made filtering decisions?

John Simpkins: I guess that you would have to refer to documentation, to examples of PEAKs, to examples of KELs for every single one, and we felt that it’s something that just gives them everything.

Mr Beer: Gives the postmasters everything?

John Simpkins: Well, in the ARQ, have all of the events and then, if you wish to ask questions about individual events, we can do that, rather than us filtering them.

Mr Beer: What’s the problem with giving them everything?

John Simpkins: There is no problem giving them anything. We were helping to do the filtering, now we’ve made a decision not to do the filtering and then you can ask about individual events instead.

Mr Beer: So have I understood you correctly to say that there came a realisation within the SSC, a point in time at which you realised the use to which your product was being put?

John Simpkins: The use to which our filtering has been put is a good way of putting it, and we decided that that’s not what we want to do.

Mr Beer: When was that?

John Simpkins: I cannot tell you, I’m afraid. I could probably try and find out by talking to the Security Team.

Mr Beer: It must have been after January 2011?

John Simpkins: Indeed, otherwise that email wouldn’t have been sent.

Mr Beer: Who made the decision?

John Simpkins: I think it was the SSC Team Leaders and the SSC Team Manager.

Mr Beer: So Mr Parker?

John Simpkins: I think that the SSC Team Leaders pushed with Mr Parker agreeing.

Mr Beer: Why did the Team Leaders have to push Mr Parker to agree?

John Simpkins: I think it’s just we were doing this process and then suddenly there’s this realisation to say “Can we not do this process?”

Mr Beer: Does it follow that, before the SSC sort of downed tools on this aspect of its work, none of you had been asked to explain in any formal way, to either the Security Team or to the Post Office, what you were doing and what filtering had occurred?

John Simpkins: I imagine you’re correct, yes. I cannot recall having done that and I don’t know the latter bit about which filtering has occurred. That is probably because you had the filtering in already as well. So I don’t know about that part.

Mr Beer: Would you understand that, if a court is presented with a set of data, it would want to know what has been done and each of the steps that have been taken to produce that set of data?

John Simpkins: Totally.

Mr Beer: That was, is this right, what led the SSC to down tools, as I’ve described it?

John Simpkins: Yes, I think that’s fair.

Mr Beer: Was that just a reluctance to be dragged into or become involved in court proceedings or was it because of difficulties in explaining the nature of the exercise that you were undertaking?

John Simpkins: I imagine it was a – I would say it’s a partial both but I would say that it made a lot more sense to give them the full events than to give them a filtered version.

Mr Beer: Can I just try and understand what you’ve just said there. What you’ve told us in your second witness statement, that the filtered ARQ information that was provided to subpostmasters does not contain sufficient information for the postmaster to assess the health of the Horizon system as it affected their branch, correct?

John Simpkins: Correct, but when I made that statement I was looking at events and transactions. I was not thinking about events – sorry, counter events, not Windows and operating system events, which is what we’re talking about now.

There was – in that witness statement, there were two pieces of evidence shown to me: one were counter events, ie logon, logoffs, things like that; and the other one was transactions.

Mr Beer: Just to try to sum up this part of your evidence: was it “We don’t want any involvement with the SSC in court proceedings after what happened to Anne”; was it, “We’re unhappy about the exercise we’re being asked to undertake and we wouldn’t want that being explored in court”; or “We know that there’s more information that could be revealed to subpostmasters to show the health of the system”?

John Simpkins: I think it was partially the first, Anne, and then also partially that it is a manual process and that you can obviously make mistakes.

Mr Beer: So does that mean you wouldn’t want your homework subject to scrutiny in a court?

John Simpkins: No, I’m happy to have my homework scrutiny’d in a law of court (sic), and I could go through and explain the reason why for each of them but would you be hauled over the coals if you had made a mistake or if an event that was, according to a KEL, not financially impacting, later on becomes financially affecting because there’s been a change?

Mr Beer: You and your colleagues must have been sufficiently concerned that that was a realistic possibility to include that in your reasoning for not wishing to do it?

John Simpkins: Correct.

Mr Beer: Can I turn lastly on this topic, please, to FUJ00225729. This series of emails, again involving you, this is October 2010, concerns the investigation of an issue of system integrity at the Ferndown sub post office. Ms Penny Thomas asked for an investigation to be undertaken and you become involved in it.

Can we start with page 3 of the email chain, please, which is the originating email, an email from Emma Langfield, the Line Service Team within Post Office, to Mr Thompson, copied to Ms Thomas and David Hulbert:

“… I hope today’s meeting … proved to be beneficial.

“My apologies for the late notification of the following but I am hoping that you will be able to assist in a rapid turnaround for an ARQ request.

“Our Security Team, who forward ARQ requests to yourselves for extraction … have this afternoon sent an emergency ARQ to Penny’s team for processing. This has come from Lynn Hobbs, Branch Network Manager, which in turn was passed into Lynn by Paula Vennells, Post Office Limited Network Director.

“This request is a data extract for the above branch from 1 September 2009 to 30 September 2010. I understand from Mark Dinsdale that the agreed turnaround for ARQ requests is 7-14 working days, but the ARQ above … is a business priority.

“Given the resource at your disposal and your team’s … workload is there any way that the 12-month extract can be completed [by] Monday, 4 October …”

I think this is being sent at 5.48 on the Friday:

“We have Helen Rose [Post Office Limited] on standby to decipher the data and this will be her priority when received, but we need to feed back a delivery date and time to Mark, Lynn and Paula.”

Firstly, had you any awareness of Helen Rose within the Post Office?

John Simpkins: No.

Mr Beer: Did you have any dealings with her or any understanding of her competency to decipher ARQ data?

John Simpkins: No.

Mr Beer: Did any members of the SSC, to your knowledge, give training or assistance to anyone within the Post Office on deciphering ARQ data?

John Simpkins: Not to my knowledge.

Mr Beer: Thank you. If we can scroll up, please, Penny Thomas to Peter Thompson, copied to Donna Munro:

“We are looking at a request for 13 months of data received at 4.30 on Friday afternoon. It is not possible to return this request today.

“I will provide an update with an estimated return time frame later in the day.”

Further up the page, please:

“Can you inform the customer of the perceived time scales at this initial stage just to provide some perspective of time scales.”

Penny Thomas:

“I can’t do more until we identify the size of outlet and number to counters.”

Up the page, please, it gets sent on to Steve Parker:

“Please see [the] string. A forewarning that we will be sending SSC thirteen months worth of events for this outlet … Would SSC please be able to review and return comments asap?”

Further up the page:

“… Steve is [out of office] – I’m not sure who to forward this to – but this really urgent …”

You, Anne Chambers and some others get copied in, yes?

John Simpkins: Yes.

Mr Beer: Further up the page, you reply:

“Of course we can look at the provided data but it will take some time to trawl through the potential number of events.

“The comment in the trail below ‘We have Helen Rose on standby to decipher the data and this will be her priority …’ implies that they would like to do the trawl themselves.”

What did you mean by that?

John Simpkins: It means that they wanted to go through the events themselves.

Mr Beer: Ie they didn’t want analysis by –

John Simpkins: They didn’t want the filtered –

Mr Beer: The filters by you?

John Simpkins: Correct.

Mr Beer: Then further up the page, please, Ms Thomas says:

“We, as a matter of course, check all system events before returning transaction records to [the Post Office].”

Is that the exercise that you have just described to us?

John Simpkins: Yes.

Mr Beer: “Their trawl is to do with transaction records …”

So she’s essentially saying “No, you still need to do the filtering first”.

John Simpkins: Correct.

Mr Beer: Is that right?

John Simpkins: Yes.

Mr Beer: “Their trawl is to do with transaction records, which, I’m sure you’re aware is a totally different kettle of fish.”

Then further up the page the PEAK numbers are set out and:

“… there is a lot of senior management focus on this request from both Fujitsu and [Post Office] so [please treat it] as a priority.”

So is this just another example of the SSC being asked to review ARQ data and filter it?

John Simpkins: The events, yes.

Mr Beer: Yes, the events, albeit this is on the hurry-up?

John Simpkins: Yes.

Mr Beer: So the request that’s included in that email is something that the SSC was undertaking routinely, analysing events data and filtering it, in any event?

John Simpkins: Correct.

Mr Beer: So we’ve got to the situation, is this right, Mr Simpkins, where you say, in order to look at the health of the system from a postmaster’s perspective, you would not use the filtered ARQ data to do so?

John Simpkins: If I was taking a call from a postmaster, correct.

Mr Beer: But then what’s provided to the Post Office is the filtered ARQ data?

John Simpkins: Correct.

Mr Beer: Were those two worlds ever compared with each other: I wouldn’t look at that data if I wanted properly to investigate the health of the system; I’m going to provide that data to the Post Office?

John Simpkins: I don’t know about how the ARQ process was designed or created or agreed with the Post Office. The filtering of the events is effectively something I would be doing. If a call came in for a subpostmaster, I would go through the events that’s happened in the data centre to see if I could see anything that may have affected them.

So that filtering of events is kind of something I would do, if I had a call from a subpostmaster. The ARQ – the selection of attributes to return to the subpostmaster, I don’t know how that got agreed.

Mr Beer: But you just did it because it was part of your function?

John Simpkins: No, sorry, the events bit, we’re talking about the events part, that is something I would do anyway if I had a normal call. So that’s totally – I’m totally happy with that. The other parts, the transactional part and the message store filtering, we didn’t do, and I don’t know how that got agreed between Fujitsu and the Post Office about how – what form they wanted that ARQ in.

Mr Beer: Got it, understood.

Can we turn to a new topic, please, which is remote access. Can we start by looking at FUJ00088036. Now, I asked you some detailed questions about this on the last occasion, so I’m not going to go over all of what you said but I just want to refresh in our minds what you said about it, please. So this is an outline of the secure support system of 2 August 2002; can you see that?

John Simpkins: Yes.

Mr Beer: If we look at further down on page 1, we can see that one of the approval authorities is Mr Peach, the SSC Manager, right at the foot of the page, yes?

John Simpkins: Yes.

Mr Beer: Then, over the page to the second page, in the second box down, we can see that reviewers included Mr Peach – just scroll down a little bit please, thank you – we can see that reviewers included Mr Peach and Mr Parker; can you see that?

John Simpkins: Yes.

Mr Beer: If we can go, please, to page 13, the document describes some “Areas of Concern” at 4.1:

“There are two major areas of concern with the current support processes:

“1. Second line support does not have the tools necessary to perform their function …”

Then this:

“2. Third line and operational support organisations’ access to the live system is not fully audited and in some cases is unrestricted in the actions that can be carried out …”

That’s describing that second point there, the position in the SSC; is that right?

John Simpkins: Yes, at that time, yes.

Mr Beer: Then at 4.1.2, if we just scroll down a little bit:

“Third line support staff receive repeat instances of calls that should have been filtered out by second line …

“The current … practices were developed on a needs must basis; third line support diagnosticians had no alternative other than to adopt the approach taken given the needs to support the deployed Horizon solution.

“The consequences of limited audit and system … access afforded to third line support staff provide the opportunity to:

“Commit fraudulent acts;

“Maliciously or inadvertently affect the stability of the new Network banking and Debit Card online services;

“In addition a complete audit would allow Pathway to defend SSC against accusations of fraud or misuse.”

Then on to page 15, please.

John Simpkins: I did also comment on this last time and say I don’t agree with the “commit fraud” on that, when I was last here.

Mr Beer: Yes.

John Simpkins: Okay.

Mr Beer: 4.3.2, at the top of the page, please, describing third line support:

“All support access to the Horizon systems is from physically secure areas. Individuals … in the support process undergo more frequent accurate vetting checks. Other than the above[,] controls are vested in manual procedures, requiring managerial sign-off controlling access to post office counters where update of data is required. Otherwise third line support has:

“Unrestricted and unaudited privileged access (system admin) to all systems including post office counter PCs;

“The ability to distribute diagnostic information outside of the secure environment; this information can include personal data … business sensitive data and cryptographic … information.

“The current support practices were developed on a needs must basis; third line support diagnosticians had no alternative other than to adopt the approach taken given the need to support the deployed Horizon solution.

“There are no automatic controls in place to audit and restrict user access. This exposes Fujitsu … to the following potential risks:

“Opportunity for financial fraud;

“Operational risk – errors as a result of the manual actions causing loss of service to outlets;

“Infringements of the Data Protection Act.”

Is what is described there in paragraph 4.3.2 accurate as representing the position in August 2002?

John Simpkins: I don’t agree with the opportunity for financial fraud. Otherwise – oh, and the cryptographic key information, we didn’t have access to.

Mr Beer: Is, overall, what is being described here the facility for third line support to have remote access to the Horizon system?

John Simpkins: We had remote access to the live system.

Mr Beer: It includes that this is unrestricted and unaudited access; is that accurate?

John Simpkins: There were definitely events written whenever we connected. So at this time, we used some software called Rclient to connect.

Mr Beer: Capital R, capital C (sic), client?

John Simpkins: Yeah. Sorry, yes, Rclient, and it would have written a Windows event when we had written to the counters or data centres. It would have also – before we got there, as I said, we connected to the data centre and that would have also been audited as well. So it’s not unaudited but I don’t believe it would show you who connected, which person.

Mr Beer: When you say it’s not unaudited, it’s not unauditable; is that right?

John Simpkins: It says “unaudited privileged access”.

Mr Beer: Yes, but what you’ve just described is that the situation was that it could have been audited?

John Simpkins: Yes.

Mr Beer: Was it, in fact, audited?

John Simpkins: I don’t know. We –

Mr Beer: Was it unrestricted?

John Simpkins: Yes, I believe we had admin access, which is effectively the highest level.

Mr Beer: Do you know why Mr Peach would have authorised the issuing of this document; Mr Peach and Mr Parker would have reviewed the document and let things remain in it that, in your view, are not accurate?

John Simpkins: I can’t comment about that, I’m afraid. I can just give you mine.

Mr Beer: Sorry?

John Simpkins: I cannot comment about their review process. I can just give you mine.

Mr Beer: For how long did what’s described in this document remain the position after August 2002?

John Simpkins: I did a little research when I saw this in my pack and I found a PEAK that said in – was defining in July 2003 the new SSH – OpenSSH was being used and there was a PEAK on an issue the SSC had with it so we were definitely using it by July 2003.

Mr Beer: What was the new system?

John Simpkins: So the new system used something called OpenSSH and it allowed us to log every single key press that the third line support person made when connecting down to the counter.

Mr Beer: When you say it allowed you to –

John Simpkins: Sorry, it was.

Mr Beer: It was.

John Simpkins: The software was recording every single key press to an auditable file.

Mr Beer: So that was in place from at least July 2003?

John Simpkins: Yeah, I’m sure you could probably find out the – once you know what release package that went under, you should be able to find out the exact date. But, as I say, I found that PEAK and so I know it was working from July 2003.

Mr Beer: Was it recognised within the SSC at this time that the privileged access that the 25-odd members of the SSC had was an uncomfortable position to be in?

John Simpkins: Probably when it was pointed out because support wouldn’t know what operating system logging and everything else around us was in place. We were told, “How can you connect to the counter? This is how you connect to the counter, this is how you do your job”. When it was pointed out, I imagine, yes, they would be uncomfortable with it. The new version gave us better wrapper around our commands, so we actually had more facilities with the new OpenSSH, we had a Cygwin shell down there, which we connected to, and it was nice enough for support to use, overall.

Mr Beer: Can we move forward eight years or so, until October 2010, and look at POL00117863, please.

This is a document that isn’t dated but, from other evidence, looks to have been created for the purposes of a meeting held at the beginning of October 2010. We can see that there are four Fujitsu employees attending or proposed to attend the meeting, and they included you; can you see that?

John Simpkins: Yes, I presume this was created by Post Office as they got my role incorrect.

Mr Beer: Because it records you as being a member of Security?

John Simpkins: Correct.

Mr Beer: Unfortunately, the document is not authored and, as I’ve said, not dated. Did you, in fact, attend this meeting?

John Simpkins: I can’t remember.

Mr Beer: Let’s look at what is recorded, and this seems to be a note prepared for the purposes of a meeting, rather than a record of the meeting.

John Simpkins: Okay.

Mr Beer: “What’s the issue?”

The Chairman is very familiar with this document. I want to use it for a purpose different than that which it is usually used for.

“What’s the issue?

“Discrepancies showing at the Horizon counter disappear when the branch follows certain process steps, but still show within the back end branch account … currently impacting [around] 40 branches since migration onto Horizon Online, with an overall cash value of [around] £20,000 loss. This issue will only occur if a branch cancels the completion of the trading period, but within the same session continues to roll into a new balance period.”

So, overall, would you agree that that describes what is known as the receipts and payments mismatch bug?

John Simpkins: Yes, I commented on this last time I was here, as well. One of my –

Mr Beer: I don’t think you commented on this document last time?

John Simpkins: No, I did comment on the receipts and payments, I think it was when the Core representatives asked me questions.

Mr Beer: Yes.

John Simpkins: One clarification I made at that time was that this is visible to the subpostmaster. So it’s visible on the balance report they print out, because there’s a difference between receipts and payments, and it’s also visible when they roll the branch trading statement because they will get a non-zero trading position, and that seems to have not been picked up from my last – when I was last here, because it’s been referred to as they can’t see this.

I’ve got some PEAKs where I can demonstrate the subpostmaster got that non-zero trading position, rang in, and we’re told it’s related to this, and that it’s with Post Office and they know about it.

Mr Beer: If we go to page 2, please, and look a third of the way down, the part that I think is emboldened:

“Note the branch will not get a note from the system to say that there is Receipts and Payment mismatch, therefore they will believe they have balanced correctly.”

Is that accurate?

John Simpkins: Yes, at a point in time. So what happens is you are rolling your stock unit, you go to roll your stock unit and you get a discrepancy warning. You cancel, you go back to the previous screen. Then you carry on anyway and it’s lost that discrepancy and you have a receipts and payments mismatch but, because you’re past the trial balance, it doesn’t tell you. That’s this point.

It does print it out. Later you will roll your branch, which is all the stock units added together. That takes all the stock units and realises that they don’t add up. That’s when you get the non-zero trading position error reported to the subpostmaster.

Mr Beer: So they’ll see there’s an error but they won’t know the cause of it?

John Simpkins: They will see the receipts – it’s basically telling you the receipts and payments for all your stock units do not add up to zero.

Mr Beer: Moving down to the “Impact”:

“The branch has appeared to have balanced, whereas in fact they could have a loss or a gain …”

John Simpkins: Correct.

Mr Beer: “Our accounting systems will be out of sync with what’s recorded at the branch …”

John Simpkins: That’s Post Office’s side. I believe that’s correct. I couldn’t tell you.

Mr Beer: “If widely known could cause a loss of confidence in the Horizon system by branches.

“Potential impact on ongoing legal cases where the branches are disputing the integrity of Horizon data.

“It could provide branches ammunition to blame Horizon for future discrepancies.”

Were these concerns of yours, these last three?

John Simpkins: No.

Mr Beer: They were concerns of other people at the meeting, were they, presumably?

John Simpkins: Presumably whoever called the meeting, yes.

Mr Beer: Over the page, please, top of the page:

“The Receipts and Payment mismatch will result in an error code being generated which will allow Fujitsu to isolate branches affected by this problem, although this is not seen by the branches.”

John Simpkins: As I say, again, it’s twice it tells them. Once on the stock unit balance report, it’s got the mismatch on that report, and when they roll the branch it will tell them.

Mr Beer: So that’s inaccurate?

John Simpkins: Yes.

Mr Beer: “We [that tends to suggest this was written by the Post Office] have asked Fujitsu why it has taken so long to react to and escalate an issue which began in May”, they’re going to get back to the Post Office.

“Fujitsu are writing a code fix which will stop the discrepancy disappearing from Horizon in the future. They are aiming to deliver this into test week [of] 4 October …

“The code fix will stop the issue occurring in the future but it will not fix any current mismatch at branch.”

John Simpkins: Yes.

Mr Beer: “Proposal for affected Branches”, if we go down, please, and look at solutions 1, 2 and 3:

“There are three potential solutions to apply to the impacted branches.”

The recommendation is that 2 should be adopted:

“SOLUTION ONE – Alter the Horizon branch figure at the counter to show the discrepancy. Fujitsu would have to manually write an entry value to the local branch account.”

Under “Risk”:

“This has significant data integrity concerns and could lead to questions of ‘tampering’ with the branch system and could generate questions around how the discrepancy was caused. This solution could have moral implications of Post Office changing branch data without informing the branch.”

So does that reflect the fact that, at this time, in October 2010, Fujitsu had the ability to manually write entries into local branch accounts and that would not be visible to the subpostmaster?

John Simpkins: So this is HNG-X, so this is the branch database. So Fujitsu could make up entries to the branch database.

Mr Beer: Without the subpostmaster knowing about it?

John Simpkins: Yes.

Mr Beer: So, in that sense, it would be covert, wouldn’t it?

John Simpkins: If you don’t tell someone about it, I guess that is –

Mr Beer: Yes, it would be completely invisible to the subpostmaster that Fujitsu had been inserting values into their accounts?

John Simpkins: Yes, it could be invisible, if they – it does say that the – if they’ve already rolled, then there’s going to be – they would have already known that there was issues but, yes, you could not see potentially us inserting an update into a database, that is totally separate from the counter.

Mr Beer: So, as at October 2010, Fujitsu retained the facility of remote access to write entry values to local branch accounts covertly without a subpostmaster knowing?

John Simpkins: It’s not remote access. We’re in the data centre. So it’s the branch database where this change will take place in HNG-X.

Mr Beer: But, nonetheless, the facility to write entries into accounts which have the effect of changing financial information covertly, ie without the subpostmaster knowing it has even occurred?

John Simpkins: Yes.

Mr Beer: So I’m not looking about whether it was done and whether it was revealed or not, I’m just saying this is a record for a different purpose of showing that that facility remained?

John Simpkins: It is a different facility, obviously, because we’re now talking about a database update in the branch database and we’re not talking about accessing the counter at all. So the counter’s records are now held centrally and we are talking about updating it in the branch database but, yes, that is a – there is the possibility of a database update and, if you don’t communicate that to the subpostmaster, you’re making a database update, then that is correct.

Mr Beer: There’s nothing else I want to ask about this document – we have trawled over it with other people a lot – other than which solution, 1, 2 or 3, was adopted?

John Simpkins: I don’t believe I – we definitely didn’t do any updates to the database, so I don’t know which option. I think “Don’t do anything” was probably – whether they updated the POLSAP systems, I can’t tell you. That’s Post Office.

Mr Beer: So you can’t recall out of solutions 2 and 3 which was adopted?

John Simpkins: No.

Mr Beer: Thank you. Can we move on, please, to POL00029791. This is a document that we think dates from 2014, if we just go to page 10, please. We can see that the facility has been used to record who made the changes and the dates that they made them. Can you see that?

John Simpkins: Yes.

Mr Beer: Hence why I’m suggesting that it’s 2014, so in fact December 2014. Back to page 1, please. It’s part of the review and mediation scheme, correspondence, essentially, between the Post Office and Second Sight. The document records that Second Sight has asked:

“Can Post Office or Fujitsu edit transaction data without the knowledge of a subpostmaster?”

Then, if we go to the foot of the page, please:

“This document has been prepared with the assistance of Fujitsu and the Post Office IT&C team. Both have approved the document as being accurate.”

Were you part of the group of people from Fujitsu who helped to prepare the document?

John Simpkins: No, I don’t believe so.

Mr Beer: Have you ever seen the document before, other than in preparation for this case –

John Simpkins: No.

Mr Beer: – for this Inquiry?

John Simpkins: No.

Mr Beer: Just go back to the top of the page, please:

“Phrasing the question in this way [that’s ‘Can Post Office remotely access Horizon?’] does not address the issue that is of concern to Second Sight and Applicants. It refers generically to ‘Horizon’ but more particularly is about the transaction data recorded by Horizon. Also, the word ‘access’ means the ability to read transaction data without editing it – Post Office/Fujitsu has always been able to access transaction data however it is the alleged capacity of Post Office/Fujitsu to edit transaction data that appears to be of concern … it has always been known that Post Office can post additional correcting transactions to a branch’s accounts in ways that are visible to subpostmasters (ie [TCs and TAs]) – it is the potential for any hidden method of editing data that is of concern.

“In the light of these issues, Second Sight and Post Office have therefore agreed the above reformulation of the question to be addressed”, ie can Post Office and Fujitsu edit transaction data without the knowledge of a subpostmaster?

If you had been asked that question “Can Post Office or Fujitsu edit transaction data without the knowledge of a subpostmaster”, your answer would be yes, wouldn’t it?

John Simpkins: I would say Fujitsu would be able to without the correct controls.

Mr Beer: Fujitsu could but Post Office can’t?

John Simpkins: I can’t see how Post Office can.

Mr Beer: Yet the answers given:

“In summary, Post Office confirms that neither it nor Fujitsu can edit transaction data without the knowledge of a subpostmaster.”

That’s just wrong, isn’t it?

John Simpkins: This is HNG-X, so, yes, it is possible with the DBA or sufficient access to a database to update the database.

Mr Beer: So just to answer my question, that sentence, “In summary, neither Post Office nor Fujitsu can edit transaction data without the knowledge of a subpostmaster” is wrong, isn’t it?

John Simpkins: I believe so.

Mr Beer: Over the page, please, to page 2. Just under the bullet points next to edit 9, a sentence which begins, “There is no functionality”; can you see that? Thank you:

“There is no functionality in Horizon for either a branch, Post Office or Fujitsu to edit, manipulate or remove a transaction once it has been recorded in a branch’s accounts.”

That’s wrong as well, isn’t it, insofar as it concerns Fujitsu?

John Simpkins: Yeah, it’s the – the basic functionality. We did have the branch – sorry, the transaction correction tool, which we used once, and I would call that functionality in Horizon. The bit – the fact that it is a database and someone, a DBA could have access to it, is not functionality in Horizon, if that makes sense.

Mr Beer: So, overall, if you had seen that sentence, you would have said that is incorrect?

John Simpkins: The functionality – the basic functionality is that is correct, you can only add using the correction tool. As a DBA, you could have access to a database –

Mr Beer: Thank you.

John Simpkins: – and update it.

Mr Beer: That can come down, thank you. I think it’s right that you didn’t give evidence about remote access or any evidence in the Group Litigation Order proceedings in the High Court; is that right?

John Simpkins: That’s correct.

Mr Beer: But you told us on the last occasion that you and other colleagues in the SSC provided information to the solicitors, as you said?

John Simpkins: That’s correct.

Mr Beer: Was that the solicitors for the Post Office?

John Simpkins: Yes.

Mr Beer: What information did you provide the solicitors to the Post Office?

John Simpkins: We were asked many questions, I believe, mainly about PEAKs and about KELs, about how the system worked. We –

Mr Beer: Did you openly discuss the existence of KELs with the solicitors for the Post Office?

John Simpkins: I wrote a program to export them all to files so they could have a copy.

Mr Beer: Why were you providing information to the solicitors for the Post Office in the Group Litigation proceedings?

John Simpkins: Probably two reasons: I wrote PEAK and I maintain the SSC website with the KELs, the web constructions, et cetera, so therefore I’m the person to export from those. We are part of SSC and, therefore, a technical unit with the knowledge of how the system works, and Steve was giving witness evidence and –

Mr Beer: Was there a stage when you were going to be used as a witness?

John Simpkins: I was asked if I would like to and I declined.

Mr Beer: You declined. Why did you decline?

John Simpkins: I didn’t want to.

Mr Beer: Why not?

John Simpkins: They actually asked me if I would like to and I said no.

Mr Beer: Why didn’t you want to give evidence?

John Simpkins: I was not comfortable giving evidence.

Mr Beer: Why were you uncomfortable?

John Simpkins: Because it’s not in my skillset to give evidence.

Mr Beer: Or was it the substance of the evidence that you might give?

John Simpkins: No, I’m happy with lots of questions and answering questions, that’s my daily role. I’m more than happy to do that. I don’t like the environment.

Mr Beer: You told us on the last occasion you were aware of a discussion at the time of the Group Litigation about the suitability of Gareth Jenkins as a witness. Was that to his suitability to give evidence as a witness in the Group Litigation?

John Simpkins: No, I don’t think I commented on his suitability.

Mr Beer: Was that, therefore, a discussion about his past suitability as a witness?

John Simpkins: Yes, I think – sorry, could you go through the question again?

Mr Beer: Yes. You told us on the last occasion that you were aware of discussion at the time of the Group Litigation about the suitability of Gareth Jenkins as a witness and I’m asking: is that his suitability as a witness to give evidence in the Group Litigation or his past suitability as a witness to give evidence in other proceedings?

John Simpkins: No, he’s definitely more than capable of giving evidence. He knows his subject extremely well. That was I think in reference to a document I’d seen about the Post Office talking about his suitability.

Mr Beer: So was there a discussion in the run-up to the Group Litigation trials about Mr Jenkins’ suitability to give evidence as a witness?

John Simpkins: I can’t recall that. He’s more than – he’d be absolutely fine doing that, from what I know of him.

Mr Beer: What, therefore, was the discussion about, then?

John Simpkins: Um –

Sir Wyn Williams: I’m sorry to interrupt but while Mr Simpkins is thinking, I haven’t got him on the screen.

Now, I have. Thank you.

Mr Beer: Thank you. What was the discussion about, then?

John Simpkins: I cannot recall what the discussion was about because he would be the perfect person to give evidence.

Mr Beer: Why did Mr Parker end up giving evidence about, amongst other things, remote access and not the “perfect person”, Mr Jenkins?

John Simpkins: Mr Jenkins is the architect. Mr Parker is the Support Manager. I presume he was told to put it in his witness statement.

Mr Beer: Told by who?

John Simpkins: I would say the Post Office lawyers.

Mr Beer: So who was this discussion between, about the suitability of Gareth Jenkins as a witness?

John Simpkins: I’m not totally sure.

Mr Beer: What was the outcome of the discussion, that he should give evidence or shouldn’t give evidence?

John Simpkins: I always think Mr Jenkins should give evidence. He knows –

Mr Beer: Do you know why he didn’t give evidence in the Group Litigation?

John Simpkins: No, you would have to ask.

Mr Beer: Did you contribute to the drafting of Mr Parker’s witness statements?

John Simpkins: Yes.

Mr Beer: Why did you contribute to the drafting of Mr Parker’s witness statements to the High Court?

John Simpkins: Because he asked me to.

Mr Beer: Did you provide comments or instructions to the Post Office solicitors on the evidence that Richard Roll, a whistle-blower, had given about the facility of Fujitsu to have remote access?

John Simpkins: I almost certainly provided comments. I think I provided comments on several witness statements.

Mr Beer: Why was it then that Mr Parker was the witness who was selected to give evidence?

John Simpkins: I expect he provided comments as well.

Mr Beer: Can we look, please, at FUJ00083835. This is the first of Mr Parker’s witness statements to the High Court. You’ll see there are some uncontroversial introductory remarks and, on page 2, at paragraph 8, he begins a section of his statement commenting on Mr Richard Roll’s witness statement, dated 11 July 2016.

Paragraph 9, a further description of essentially the difference between Legacy and Horizon Online.

Then paragraph 10, please, comments on Mr Roll’s work.

Then, over the page, please, to paragraph 11:

“In his statement Mr Roll suggests that there were frequent instances of software problems in Horizon that had an impact on branch transaction data and which Fujitsu resolved ‘remotely’ (ie not in a branch), not merely by changing software but also by frequently changing branch transaction data (by injecting new transaction data and by editing or deleting existing transaction data), without informing branches that such actions were being taken … those suggestions are incorrect and Mr Roll’s account … is inaccurate and misleading.”

Did you contribute to the drafting of that paragraph?

John Simpkins: No, but I agree with it.

Mr Beer: You agree with what is said?

John Simpkins: I agree that we didn’t make frequent changes. I went through the ACPs and OCRs that we used to record such things and I think in 10 years I’ve found evidence of 28 financial remote changes, and I also disagree that we didn’t tell the subpostmasters. I’ve only ever seen one PEAK where I think that that was mentioned.

Mr Beer: Forward to paragraph 16, please. Mr Parker says:

“It was (and is) theoretically possible for there to be a software problem which could cause a financial impact in branches, but this was (and is) extremely rare and Horizon’s countermeasures were (and are) very likely to pick such matters up. In my experience, these problems have always represented a very small proportion of issues which led to software changes and a very small proportion of the overall issues dealt with by the SSC.”

Did you contribute to the drafting of that paragraph?

John Simpkins: No.

Mr Beer: Was it only theoretically possible for software problems to cause financial impacts in branches?

John Simpkins: No, we had evidence through the PEAKs.

Mr Beer: So it wasn’t just theoretically possible, it had actually happened?

John Simpkins: Correct.

Mr Beer: Page 4, paragraph 18, please:

“In Legacy Horizon it was possible for the data in a particular counter in a branch to become inconsistent with replicated copies, and Mr Roll appears to be describing this in paragraph 17 of his statement. In that situation there could be remote management by Fujitsu to correct the problem, but branch transaction data was not changed in any way. As explained … below, the workaround involved replicating the correct data from the counter in the affected branch or from the data centre copy.”

Did you contribute to the drafting of that paragraph?

John Simpkins: No.

Mr Beer: Is what is said in the second sentence, “there could be remote management by Fujitsu but branch transaction data was not changed in any way”, accurate or inaccurate?

John Simpkins: For – I think we’re talking about marooned transactions here, which was what we covered in my witness statement 3, and you would not change the data that you recover from a marooned transaction, apart from making it so it doesn’t clash with any new transactions entered.

Mr Beer: Paragraph 19, please:

“The suggestion that Fujitsu edited or deleted transaction data is not correct. In Legacy Horizon it was not possible to delete or edit messages that had been committed to the message store.”

Did you contribute to the drafting of that paragraph?

John Simpkins: I don’t believe so, no.

Mr Beer: Is what is said in the first sentence there accurate or inaccurate?

John Simpkins: That is accurate. Once it’s been inserted and replicated, then you don’t – cannot edit. You only add.

Mr Beer: At paragraph 20, please – in fact, we needn’t go on to paragraph 20.

Do you know that Mr Parker made a second witness statement in which he climbed down from some of the things that he said in his first?

John Simpkins: I believe he did make a second, I can’t remember what was in it.

Mr Beer: Well, in particular – given the constraints of time, I’m not going to go through it all with you – he says that in his witness statement, Mr Roll describes a process by which transactions could be inserted via an individual branch counter by using the correspondence server to piggyback through the gateway. That’s a correct description of a form of remote access, isn’t it?

John Simpkins: Yes, because, once you’ve inserted the message into the correspondence server, it will be replicated down to the counter.

Mr Beer: Do you know why that did not appear in Mr Parker’s evidence to the court in his first witness statement?

John Simpkins: No, I don’t.

Mr Beer: Were you providing instructions and information to Mr Parker on which he made his witness statements 1 and 2?

John Simpkins: I definitely commented. He emailed me and asked me for comments, so I definitely commented. I wouldn’t say I provided instructions. I would never instruct him.

Mr Beer: Do you know why Mr Parker did not mention this form of remote access in his first witness statement?

John Simpkins: No, I don’t.

Mr Beer: Was that the subject of discussion with you?

John Simpkins: I don’t know, actually, whether I commented on it during one of my comments on his witness statement but, I’m sorry, I could not tell you.

Mr Beer: Do you know why a witness statement that was addressing the topic of remote access did not volunteer this form of remote access that was available to Fujitsu at all?

John Simpkins: No, I don’t know.

Mr Beer: Yes, thank you.

Sir, those are the questions that I would wish to ask.

Sir Wyn Williams: Are there questions from Core Participants?

Mr Beer: I’m just looking for a third shake of the head and a fourth.

No. No, there aren’t, sir.

Sir Wyn Williams: So that completes the questioning?

Mr Beer: Yes, it does, sir.

Sir Wyn Williams: Thank you, Mr Simpkins, for returning to the Inquiry, for providing two further witness statements and for answering Mr Beer’s questions this morning and early this afternoon. I’m grateful to you.

The Witness: Thank you, sir.

Mr Beer: Sir, might we adjourn until 2.05, please.

Sir Wyn Williams: Certainly, yes.

Mr Beer: Thank you very much.

(1.05 pm)

(The Short Adjournment)

(2.08 pm)

Ms Price: Good afternoon, sir, can you see and hear us?

Sir Wyn Williams: Yes, thank you very much.

Ms Price: May we please call Mr Barnes.

Gerald Barnes

Questioned by Ms Price

Ms Price: Could you confirm your full name, please, Mr Barnes?

Gerald Barnes: Mr Gerald James Barnes.

Ms Price: Thank you for coming to the Inquiry to assist it in its work. As you know, I will be asking you questions on behalf of the Inquiry. You should have in front of you hard copies of two witness statements in your name, in a bundle. The first is at tab A of that bundle and is dated 30 August 2023. If you could turn, please, to page 23 of that, please.

Gerald Barnes: Right, yes.

Ms Price: Do you have a copy with a visible signature?

Gerald Barnes: Yes, I do, yes.

Ms Price: Is that your signature?

Gerald Barnes: It is my signature, yes.

Ms Price: The second statement is at tab A2 of that bundle and is dated 19 December 2023. If you could turn to page 13 of that statement, please.

Gerald Barnes: Right, yes.

Ms Price: Is there also a visible signature on that copy?

Gerald Barnes: There is, yes.

Ms Price: Is that your signature?

Gerald Barnes: It is my signature, yes.

Ms Price: Are the contents of your statements true to the best of your knowledge and belief?

Gerald Barnes: Yes, they are, yes.

Ms Price: For the purposes of the transcript, the reference for Mr Barnes’ first statement is WITN09870100 and the reference for the second statement is WITN09870200.

Mr Barnes, I will not be asking you about every aspect of the statements that you have provided, which will be provided and published on the Inquiry website in due course. I will instead be asking about certain specific issues which are addressed in them.

Starting, please, with the relevant roles you have held with Fujitsu Services Limited over the years you have spent in its employment, in broad terms, you have been a software developer with Fujitsu since 1998; is that correct?

Gerald Barnes: That is correct, yes.

Ms Price: You remain employed by Fujitsu?

Gerald Barnes: That is correct, yes.

Ms Price: You explain in your first statement that your first job with Fujitsu involved looking after a database of reports produced by Post Office clerks; is that right?

Gerald Barnes: That’s right, yes.

Ms Price: You then became involved in supporting the Electronic Point of Sale Service, or EPOSS, software for transacting at the counter and balancing, as well as looking after related reports?

Gerald Barnes: That’s correct, yes.

Ms Price: Can you recall the year in which you became involved in supporting EPOSS software?

Gerald Barnes: Not exactly, no. But pretty soon, I think, I got to grips with the reports and got that all under control, and the designer in the team thought, oh, probably about time to give me some more work to do too, because I kept doing the reports, I kept that all under control, but I sort of automated it and got it moderately streamlined, so I had time to do other work and I think that’s when I started looking at other things.

Ms Price: So was it within the first year that you joined Fujitsu, if you joined in 1998?

Gerald Barnes: That’s pretty – moderately soon, I would say. I just can’t remember exact dates.

Ms Price: Whilst you were in this role, you did an evening class in bookkeeping; is that right?

Gerald Barnes: Ah, yes, that’s when I started looking at this balancing and I found that very, very interesting so, yes, in my own time I got some accounting qualifications just because I found it so interesting, that was why.

Ms Price: You give some examples of software which you developed at paragraph 6 of your first statement. Could we have that on screen, please. It is page 2 of WITN09870100.

At paragraph 6, you say:

“I remember writing a component called ‘Operation Launch’” –

Gerald Barnes: Yeah, there was quite a few things I did but that was certainly one of them. That was when we were starting looking at sales with debit cards and credit cards, yes. That was a part of that project.

Ms Price: You say:

“[It was] to facilitate the use of [those] debit and credit cards” –

Gerald Barnes: That’s right, yes, yes.

Ms Price: – “which was being introduced in the earlier version of the Horizon system [legacy Horizon].”

Gerald Barnes: That’s right, yes, that’s correct, yes.

Ms Price: You go on to give another example of software you wrote at paragraph 6, and you say this:

“I also remember writing the migration software that enabled a counter to transition from using Escher’s Riposte software platform to the new system (known as ‘HNG-X’ or ‘Horizon Online’). Because of this piece of work, I believe I was the last member of the EPOSS Riposte team, which was a large team during the time of Legacy Horizon.”

Just pausing there, you say it was a large team. Can you remember how large your team was?

Gerald Barnes: Not precisely, but 10 or 20, maybe. I couldn’t give you an exact figure.

Ms Price: You deal at paragraph 7 of your statement with the circumstances in which you moved to the Audit Team and you say this:

“In 2009 or thereabouts, whilst supporting the migration software for the remaining counters to transition to HNG-X, I also started looking at the audit system in HNG-X, which was a completely new area for me. It was around this time that I then joined the Audit Team. I recall that there was already an audit system in Legacy Horizon for Riposte that I knew little about then. When I joined the team, this audit system was being rewritten as part of the transition to HNG-X. For this reason, I have limited experience and knowledge regarding the systems and processes relating to audit and ARQs in relation to Legacy Horizon.”

It’s right, isn’t it, that the Audit Team was and remains responsible for providing to the Post Office, when requested to do so, audit data retrieved from the audit servers, for the purposes of Post Office investigation of and criminal and civil or disciplinary action against subpostmasters, their assistants and managers, and those employed by the Post Office; is that right?

Gerald Barnes: That’s partly true but you can – they have queries for very many other reasons why you want to look at historic data but, certainly, that’s one of the reasons, yes.

Ms Price: You have remained in the Audit Team since you joined in around 2009; is that right?

Gerald Barnes: That’s right, I’ve done a few other things as well, when things have been slack but I’ve always been responsible for the audit software and still am, in fact.

Ms Price: Turning please to the point at which you began supporting the EPOSS software, when you took up this role, were you aware that an EPOSS taskforce had been established in August 1998 to address the escalating number of PinICLs being raised which led to the taskforce reporting significant deficiencies in the EPOSS product, its code and its design?

Gerald Barnes: No, in fact – I certainly was not aware of that then and I’m not even sure I’m aware of it until you’ve just told me.

Ms Price: Do you recall the rollout of Legacy Horizon?

Gerald Barnes: Not the rollout, I don’t think, because it would have already started rolling out before I joined but there were numerous further releases, improved releases, all the time.

Ms Price: Do you recall ever being made aware of an Acceptance Incident in around July 1999 which related to accounts not balancing sufficiently or at all?

Gerald Barnes: I’m aware of quite a few non-balancing issues. Can you be more specific in giving me some – a pointer to this particular one? Is there a page reference?

Ms Price: At this stage I’m referring to an Acceptance Incident –

Gerald Barnes: Right.

Ms Price: – in the course of the negotiation of the rollout of Legacy Horizon. Do you remember being told about an Acceptance Incident that related to balancing?

Gerald Barnes: Not specifically. I might have been but I can’t be specific.

Ms Price: You have fairly recently been provided by the Inquiry with a number of documents relating to your involvement in reported issues with Legacy Horizon. I’d like to ask you, please, about a number of those documents. Could we have on screen, please, document reference POL00028747. This is a log from the PEAK system – the “Peak Incident Management System”. The call reference is at the top left of the document, PC0059497, and at the top right we see you identified as the call logger. At the risk of stating the obvious, does that mean that you logged the call to which this log relates?

Gerald Barnes: I think it means I cloned the call – well, yes I did, but it’s a call type cloned call. I assume, although I can’t remember for sure, that there must have been an existing PEAK which I then cloned for some reason and that’s why I became the call logger.

Ms Price: Can you help, please, with what a cloned call is –

Gerald Barnes: Oh, it’s just – you have PEAKs and, for some reason or other, you might want to have a copy so that one is used for one purpose in resolving issues and another copy is used for another issue in resolving issues. So you clone it so you’ve got two of them, and then one might take one path and the other might take another path.

For example, if you’ve got two releases going on and you want an urgent fix to go out to live in the first but you’ve got to catch it up in software being developed for the follow-on release, then you’d need two: one for the live and one for the follow-on release. That’s just one example. I mean, there are others.
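
The cloning Mr Barnes describes – one PEAK copied so that each copy can follow its own path towards a different release – might be modelled roughly as follows. This is an illustrative sketch only: the field names and helper functions are invented, not the actual PEAK schema.

```python
import copy

# Illustrative model of an incident record. The field names here are
# invented for the sketch, not Fujitsu's actual PEAK schema.
def make_peak(ref, summary):
    return {"ref": ref, "summary": summary, "target_release": None}

def clone_peak(original, new_ref):
    """Deep-copy a PEAK so each copy can follow its own path, e.g. one
    tracking an urgent fix to live and one the follow-on release."""
    clone = copy.deepcopy(original)
    clone["ref"] = new_ref
    clone["cloned_from"] = original["ref"]
    return clone

# The two references below are the ones quoted in this hearing.
original = make_peak("PC0058161", "Receipts vs payments difference")
clone = clone_peak(original, "PC0059497")
original["target_release"] = "urgent live fix"
clone["target_release"] = "follow-on release"
```

After cloning, each record can be updated independently, which matches the description of one copy taking one path and the other another.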

Ms Price: The first entry on the log shows the call was made on 20 November 2000 at 13.19 and the entry at 13.20 says this:

“Receipts vs payments difference at 145004 for CAP 34. This is not a migration issue. This outlet has no other open calls on PowerHelp. Please investigate and confirm if this is a CI3 or CI4 office. If this is a CI4 office this may be a new problem.”

Was this your entry or not?

Gerald Barnes: No, no, that’s – no, that’s customer call. That wouldn’t have been something I added. That would be – no, no, that would be someone else’s entry. My entries would always have my name – where it’s got “User:_Customer call_”, my entries would always be “User: Gerald Barnes”.

Ms Price: Going over the page, please. Towards the bottom of the page there is an entry dated 8 December 2000, timed at 12.33. If we could zoom in a little on that, please. Here is an example of just that, “User: Gerald Barnes”; so is this an entry made by you?

Gerald Barnes: Yes, this would definitely be, yes.

Ms Price: It says:

“New evidence added – Messages produced when stock unit OOH was rolled.

“F) Response:”

Can you help with “OOH”?

Gerald Barnes: Oh, that’s just the name of the stock unit so all the stock units are given different names and that’s just one of the names. It could be anything, really. Any – it’s just the name of the stock unit.

So I mean, in an office – well, if it’s a very small office, you might only have one stock unit but you could have more than one or many. In a very big office, you’d have many stock units.

Ms Price: You say here:

“This is another case of transactions being dropped. At CI3_2R this happens with no error logged. At CI4L1 and above, it is often the case that an error will be reported to the user in such cases.”

Can you help with what you were referring to by CI3_2R?

Gerald Barnes: Right, these are just the names of the releases. So, well, I mean, I can’t remember in detail but in general each release rolled out of the EPOSS software would have some reference number and, although I can’t remember that far back, these must have been the references to the various releases.

Ms Price: You go on:

“I will have another look at M1 rollover and see if any further improvements can be made in error trapping to catch other Riposte Errors.”

You refer in this entry to this being another case of transactions being dropped. Is it fair to say, therefore, that this was an issue which you knew was not an isolated one?

Gerald Barnes: Well, we’re going back in time a long way but I didn’t write all this cash account software myself originally but I did spend a lot of time looking at the code and looking at PEAKs and trying to improve it. So it sounds like I could see another place where it could be improved, in this case to try to make the error handling better than it was before.

Ms Price: But this issue of transactions being dropped, you’re referring to that as being another case.

Gerald Barnes: Yes.

Ms Price: So transactions being dropped, this isn’t, it seems, an isolated case of that?

Gerald Barnes: Yes, I – from what I wrote, that must be – it must be right, yes. I can’t specifically remember that far back but I’ve written what I’ve written so, yes, it must be the case that I’ve spotted this before.

Ms Price: Is it right, on the face of your entry here, that this was a problem caused by a Riposte error?

Gerald Barnes: Oh, that’s right. That’s right. So the very – the basic Riposte errors should – if they go wrong, they should – they have their own error mechanism, which you should be able to catch. And I think what I’m saying is that the errors weren’t being caught properly, that’s what I’m saying. So they could have failed and not been noticed. Though I’ve subsequently discovered, actually, you tend to get what’s called an event written to the event log always. So one of these Riposte calls failing, although it might not be caught in the software, typically it would go to the Windows event log and would get something, a red event there.

So, after the event, you would be able to spot these things by checking the Windows event log but the software itself did not catch the errors and, in my view, that’s much better, if the software itself catches the errors and reports back.

The ideal case, if it was really written perfectly from the word go, anything goes wrong, when the postmaster rolls over the stock unit, you should have a message it’s logged somewhere in the event log – doesn’t matter where, somewhere – and a clear message reported to the postmaster “This has gone wrong, please contact the Helpdesk”.

That’s how ideally it should all work but, at this time, it wasn’t like that.

Ms Price: Being specific, this error, where it occurred at CI3_2R, did not result in an error message coming up for the user. That’s what you’re suggesting?

Gerald Barnes: That’s what it says so I suppose that’s right, yes. I mean, I can’t really remember that far back in detail but that’s what I’ve written.

Ms Price: So if it happened and appeared as a misbalance to the user, is it right that it would require further investigation of the message store or, depending on the timings of the investigation, the audit data, to explore whether an error in Horizon was to blame?

Gerald Barnes: That’s right, yes. That’s right.

Ms Price: What if the user did not report the issue and there was therefore no investigation?

Gerald Barnes: If the user didn’t report – well, then it goes unnoticed but there will be some sort of error if the – well, hmm, I suppose it’s always possible if nothing is noticed, I suppose. But, yes, unless the user reports something, then we’re not going to know about it, I would say.

Ms Price: You say in your entry that at CI4L1 and above, it is often the case that an error will be reported to the user –

Gerald Barnes: It looks like things were improved then, yes.

Ms Price: You say “often” but not always.

Gerald Barnes: Yes, that’s right. That’s right. I think that’s probably right.

Ms Price: In the context of a balancing problem, a failure in error reporting is a significant problem, isn’t it?

Gerald Barnes: Definitely. I would say absolutely, yes. Yeah, definitely.

Ms Price: Going over the page, please, there is another entry made by you, dated 11 December, which is three entries down. If we can zoom in a little, please. You appear to record that a fix was implemented. Was this a fix that was carried out by you, can you say?

Gerald Barnes: Doesn’t say explicitly there. It’s the sort of thing I’m likely to be involved with but I can’t say for sure one way or – it might have been another developer. It is not explicit, is it? I can’t remember this event well enough to be able to be assertive in my response there, other than what’s written. So might have been me, might have been another developer.

Ms Price: Two entries down, on 18 December, we can see an entry from Clifford Sawdy. He notes that there is:

“… no specific test that can be performed to prove a fix for the original problem regarding missing transactions.”

Then there is an entry on 17 January 2001, which says this:

“We’ve run through complete M1 test cycles, and subsequent stock unit rollover and cash account testing as described above by Cliff, and have been unable to reproduce this error. Suggest this is now closed.”

Then, finally, an entry on 18 January, starting at the bottom of the page, going over to the next, please, and it says:

“Closing call as fixed at future release [date] PM has not been informed.”

Why would the postmaster not be informed about the outcome of the investigation?

Gerald Barnes: I couldn’t answer that question. I was fourth line support. This would be some higher up level of support. I don’t know the answer to that question.

Ms Price: There are no further entries on this log to evidence any further check to ensure the problem, which had not been reproduced by testing by this point, would not be an issue in future releases. We can’t see any evidence of that, can we?

Gerald Barnes: That’s right. Well, I think – well, it’s a long time ago but my guess is that it was some sort of intermittent problem and, therefore, very difficult to test. You can’t really, if it’s intermittent failure, you can’t really. It’s very difficult. The best they could do is regression test everything and, if someone, which might have been me, has just simply improved the error handling in some area, all that means is that, next time this intermittent problem comes up, you’d have more evidence than before the improvement in error handling.

Ms Price: At the time, did you recognise the implications of an error in Riposte, or otherwise, causing a discrepancy in the accounts of a branch without the user in branch being aware of that error?

Gerald Barnes: I don’t think I would have thought about it. These were just technical issues to me, which I did my very, very best to fix, but I don’t think my mind would go in that direction, really.

Ms Price: Could we have on screen, please, document reference POL00028750. This is a PEAK which appears to relate to the same call on 20 November 2000, which was the subject of the last PEAK we’ve just looked at but it has an extra entry from you, and we don’t have the reference to a cloned call. So is this an example of another document that records the call that’s used for a different –

Gerald Barnes: Well, I don’t know for sure but possibly this is the original call and the call you showed me first of all is cloned from it. That’s possible. I don’t know for sure without checking it carefully.

Ms Price: Could we look, please, to the bottom of page 2. We see there the same entry we’ve just looked at. We don’t need to zoom in on that but just to show you that it is the same entry there on 8 December. Then, over the page, please. The top entry there, also 8 December – if we could just zoom in a little, please – we see at the very top of the page:

“Call PC0058161 cloned to new call PC0059497.”

Then this entry from you here at 12.38, and:

“As already stated … this is a case of Riposte System calls failing with no error being logged. At CI4L1 things are much better. The call has been cloned … to improve even further still the logging of Riposte System call errors in stock unit rollover.”

You may not be able to help at this remove but can you help with what you meant by “At CI4L1 things are much better”?

Gerald Barnes: Well, I – I can’t remember the details but, presumably, I wrote that because I was aware that a lot of fixes were going into CI4L1 but my memory isn’t that good. I can read what I wrote but that’s all I can imagine was the case, that, in general, we’d done more PEAK fixes for CI4L1, the development team in general and, no doubt, I helped in that too. So we thought that things would be better in that release.

Ms Price: It does not sound from this entry as though CI4L1 was a complete fix, does it?

Gerald Barnes: Oh no, no. I mean, you could never get every single bug from a system. That’s just – you do your best but it’s just impossible. There’s always bound to be some bugs in systems.

Ms Price: Turning, please, to your knowledge of later issues with Legacy Horizon, could we have on screen, please, FUJ00090436. You refer in your statement at paragraph 6 to your involvement in Operation Launch and we’ve looked at that reference, which you say related to facilitating the use of credit and debit cards in relation to Legacy Horizon. This appears to be a report relating to Operation Launch. The release referenced is BI3(S70). We can see you’re listed as the originator and department. Were you the author of this document?

Gerald Barnes: Yes, I think so, that’s when you – yes, I think so, yes.

Ms Price: The document is dated, looking to the top right-hand corner, 12 January 2005. Could we turn, please, to page 6 of this document. Under the heading “Non-Functional Tests” and the subheading “Performance” we have this:

“Pool paged bytes for both Desktop and Riposte were monitored with Performance Monitor for 7212 cycles of the soak test … which meant over 14,000 operations were launched. No memory leakage was detected in the Desktop at all – for Riposte the figures are given in the table below.”

Then you set out some figures. You say:

“This is much more likely to be a problem with Riposte than with Operation Launch since Operation Launch shares its memory with Desktop.”

Then, over the page, please. There is a summary of problems found:

“The only possible problem found was that Riposte may have a memory leak. It is considered beyond the scope of this module test to progress this further.”

Can you help, please, with what you mean by a “memory leak”?

Gerald Barnes: Right, well, this is something you’ve got to look out for in software development. Sometimes you allocate memory dynamically, for some temporary period of time and then you’ve always got to be sure to delete the memory block when you’ve finished with it. If you don’t, you can end up with a memory leak, where your program just starts using more and more memory until eventually you run out of memory. So you’ve always got – if you’re testing something thoroughly, you should always test for memory leaks to make sure your new component doesn’t have any memory leaks.
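
A schematic sketch of the kind of leak described here – allocations whose references are never released, so memory held grows with every operation. This is a toy model for illustration, not Riposte’s actual code; the class names and buffer sizes are invented.

```python
# Toy model of a memory leak. Each operation allocates a 1 KB working
# buffer; the leaky version keeps a reference to every buffer it ever
# made, so the memory it holds grows without bound.

class LeakyService:
    def __init__(self):
        self._buffers = []           # references are never dropped

    def handle_operation(self):
        buf = bytearray(1024)        # allocate working memory
        self._buffers.append(buf)    # bug: reference kept forever

    def memory_held(self):
        return sum(len(b) for b in self._buffers)

class FixedService:
    def handle_operation(self):
        buf = bytearray(1024)        # allocated for this operation only
        del buf                      # released when the work is done

    def memory_held(self):
        return 0                     # nothing retained between operations

leaky, fixed = LeakyService(), FixedService()
for _ in range(14_000):              # roughly the soak-test operation count
    leaky.handle_operation()
    fixed.handle_operation()
```

As the witness says, a small leak can be “got away with” for a while, but the leaky version’s memory use grows linearly with the number of operations, which is exactly what monitoring pool paged bytes over a long soak test is designed to expose.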

Ms Price: What were the implications of a memory leak for the functioning of Riposte?

Gerald Barnes: Oh, well, I don’t think the figures were that big a memory leak. So as long as it’s small, you can get away with it. It’s only if you have a big memory leak that you have real serious issues. As long as it’s just some small problem, then you can get away with that.

Ms Price: Could we have on screen, please, FUJ00154684. This is a PEAK log relating to a call from the National Business Support Centre on 20 December 2007. The log reference is PC0152376. You have dealt with this PEAK at paragraphs 29 to 33 of your first statement. About halfway down the page, we can see a summary of the issue being raised. Starting with “Ibrahim”:

“Ibrahim from the NBSC has asked that an issue be investigated by our software team regarding discrepancies still showing when the MIS stock unit is rolled to clear the local suspense account.”

Then under, “Incident History”, there’s some more detail. It says:

“On Wednesday 12/12 the BM stock unit had a gain of £465.73. As this stock unit rolled over it was forced to clear local suspense £1,083.76. The gain of £465.73 did not go to local suspense and is not included in the £1,083.76.

“This was not the last stock unit to roll over. The last stock unit to roll over was MIS at 10.20 on 13/12. This stock unit had no discrepancies. MIS is a correction stock unit and was not inactive as it is rolled over every BP.

“The suspense account and final balances corroborate the above as the office has sent us copies.

“The trading statement agrees with the suspense account and that BM stock cleared suspense but did not send its gain to suspense. The trading position line should always show zero. Under the BM stock column it shows £465.73.

“I have had a trial done on BM stock to see if this is showing the £465.73 but it is not.”

So the problem being reported was one of discrepancies in the account; is that right?

Gerald Barnes: Yes, that’s right. That’s right, yes.

Ms Price: If we could go to page 3 of this document, please, there is an entry made by you on 2 January 2008, that second entry down, in which you say:

“The fact that EPOSS code is not resilient to errors is endemic. There seems little point in fixing it in this one particular case because there will be many others to catch you out. For example when I tried to balance with CABSProcess running I found that declaring cash failed with the same sort of error message!”

Pausing there, can you explain, please, the role which EPOSS code played in relation to the error which was operating in this case?

Gerald Barnes: Well, I mean, the EPOSS – that is what the stock balancing is – it is the EPOSS code, is – stock balancing code is part of the EPOSS code but the EPOSS code is more general than that. There’s lots of EPOSS code. For example, just selling a stamp would be EPOSS code but also, more specifically, stock balancing would be part of the EPOSS code.

Ms Price: Which errors was EPOSS code not resilient to?

Gerald Barnes: Well, it’s – we just spotted cases where the error handling was not as good as it could have been, which we tried to eliminate over the years. So sometimes calls to write out a message would fail silently, though as I mentioned before, though silently to the code, you always get a red event written into the Windows event log, so you can – so the postmaster wouldn’t be directly aware of the failure but analysis of the logs after the event would show the problem.

In my opinion, it would be far better if, when something like this went wrong, immediately the software should abort and the postmaster should just be told “An error has occurred, please contact the Helpdesk”, or something like that. So the error handling wasn’t as good as it could have been if designed properly from the start, but that’s not to say that the evidence wasn’t there to spot the problem after the event because we get information in the Windows event log, et cetera.

So what I’m saying is the error handling, in an ideal world, could have been done much better but, nevertheless, it’s not to say that you can’t detect the problem, because you can, and –

Ms Price: Apologies. That is what you’re referring to, is it, when you say that the code was not resilient to the errors, to the error handling process?

Gerald Barnes: That’s right, yes. So it’s just not as good as it could have been: ideal behaviour, any problem, log it, abort, just say to the postmaster “Please contact the Helpdesk”. That would be the ideal error handling in my view.
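
The contrast drawn here – code that blunders on after a failed write versus code that aborts with a clear message – can be sketched as follows. This is an illustrative model, not the EPOSS code; `write_message` and the amounts are invented stand-ins for a Riposte-style call that can fail.

```python
event_log = []   # stand-in for the Windows event log

def write_message(amount, fail=False):
    """Invented stand-in for a message-store write that can fail."""
    if fail:
        raise IOError("message store write failed")
    return amount

def balance_silently(amounts, fail_on=None):
    """'Blunder on': swallow the failure, log an event, keep going.
    The user gets a result, but it is silently missing a transaction."""
    total = 0
    for i, amount in enumerate(amounts):
        try:
            total += write_message(amount, fail=(i == fail_on))
        except IOError as exc:
            event_log.append(str(exc))   # visible only to later analysis
    return total

def balance_fail_fast(amounts, fail_on=None):
    """Abort on the first failure with a clear message for the user."""
    total = 0
    for i, amount in enumerate(amounts):
        try:
            total += write_message(amount, fail=(i == fail_on))
        except IOError as exc:
            event_log.append(str(exc))
            raise RuntimeError(
                "An error has occurred, please contact the Helpdesk"
            ) from exc
    return total
```

With a failure mid-run, the silent version returns a plausible-looking but short total – an unexplained discrepancy from the user’s point of view – while the fail-fast version surfaces the problem immediately, which is the behaviour the witness describes as ideal.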

Ms Price: The reason for this, was this because there were deficiencies in EPOSS code itself?

Gerald Barnes: Well, in the error handling. I mean, I thought the EPOSS code was quite clever, really, but in the error handling, it wasn’t done as well as it could have been done, had the time been taken to do so. But the code itself – in programming you have what’s called the happy path. The happy path is when everything is being done well. In the happy path there’s no problems.

Ms Price: Was this view, the fact that the EPOSS code was not resilient to errors was endemic, a view that was held within your team at the time?

Gerald Barnes: Well, I only spoke to a few colleagues about the issue. I could give you some hearsay quotes, if you like, but I can’t give names, I don’t think.

Ms Price: Were there others who shared your view?

Gerald Barnes: Well, the people I talked to didn’t seem to think that way. For example, one colleague said, “Well, you’ve got to assume all this fundamental stuff works, you’ve just got to assume that”. Another colleague said, “Well, when it’s all developed in the first place, it was assumed that all the error handling would be automatically added later”.

So the two colleagues I informally mentioned this to didn’t seem to quite share my views, to be honest. But that’s not – I didn’t mention it to everyone in the entire team, though. So …

Ms Price: In your entry, you gave an example of trying to balance with CABSProcess running, and declaring cash failing with the same sort of error message. Can you explain what the CABSProcess was, please?

Gerald Barnes: Yes, well, it’s – it was just a piece of software run each evening about 7.00, which just – you have end of day markers which jot the – divide each day and it just summarises all the transactions that go on in the day in some way. I can’t remember the details beyond that but it’s just a summary of transactions that occur each day, around about seven o’clock every evening.

Ms Price: You describe in your first statement at paragraph 27 that an issue relating to the CABSProcess could cause potentially incorrect data to be presented to the audit system. Is that what happened here?

Gerald Barnes: Yes, that’s right. I mean that’s right. So because of it, the messages logged are incomplete. Yes. Nothing is wrong with the audit system itself but the data to be presented to it later would be incomplete.

Ms Price: You go on in this entry in your PEAK to say this:

“It may be worth passing on the general message to the HNG-X team that, in many cases code should always try and exit gracefully after an error and not just blunder on regardless.

“This is a perfect example of why. Had the balancing code exited gracefully then if the user had tried again after CABSProcess had finished working then all would have been well.”

Was the effect of this, the code not exiting gracefully, that to which you refer at paragraphs 29 and 31 of your first statement, that the failure is silent?

Gerald Barnes: That’s right, yes. Well, relative – well, silent to the postmaster. As I say, information is available in the event log. It would be available to a diagnostician, looking at it, but silent to the postmaster.

Ms Price: Could we have paragraph 31 of Mr Barnes’ first statement on screen, please. It is page 15 of WITN09870100. You explain the silent failure point in this way, at paragraph 31:

“The fact that the failure was silent was really bad error handling. Good programming practices would be to abort (ie for the code to stop running) with a clear error message. It is better to produce no results than incorrect results, and good error handling should be coded from the start. However, my understanding is that in PEAK PC0152376, an error was written to the audit log and then processing continued, so although the operator at the Post Office branch would not know anything had gone wrong, a detailed analysis of the audit log after the event would have revealed the problem.”

This is substantially the same point, is it not, as the point which arose in the context of the Riposte error under CI3_2R in 2000, the lack of an error message, meaning that the user is not alerted to an error in the system having occurred. Would you agree that it is substantially the same problem?

Gerald Barnes: Yes, that’s right, yes. It’s the same sort of thing.

Ms Price: It is arising now in 2007, into 2008. Could we have back on screen, please, FUJ00154684, page 3, please. You proposed a fix to the problem on 2 January 2008, and you explained it in this way, starting 4 paragraphs down in your entry:

“For the time being I propose a much cheaper solution than rewriting a lot of EPOSS error handling.

“The problem is that because of a previous PEAK … CABSProcess writes out messages atomically. It does a StartTransaction quite early on (which creates the lock), then initiates writing lots of transactions with CreateMessage and persistent objects with PutObject, and finally really writes them with a call to EndTransaction (which ends the lock). If something else tries to write a transaction whilst CABSProcess has things locked then it will time out after 10 seconds. Hence if CABSProcess takes more than 10 seconds to run, you could get this sort of problem. In this case, CABSProcess took 33 seconds to run which gives a significant window of opportunity for this sort of problem to occur. I suggest addressing this matter directly by having CABSProcess store all that it wants to write out to a collection and then only really write it out at the very end. In this way the system will be locked for less than 10 seconds and there will be no possibility of this sort of problem.”
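
The mechanism set out in this entry – a transaction lock held for the whole run, so that any other writer times out after 10 seconds – can be modelled schematically. The 10-second timeout and the roughly 33-second run come from the entry itself; the per-message costs below are invented for illustration.

```python
COMPUTE_COST = 0.025   # assumed: seconds to prepare one message
WRITE_COST = 0.005     # assumed: seconds to write one message out
TIMEOUT = 10.0         # other writers time out after 10 seconds

def lock_held_original(n_messages):
    """Original behaviour: StartTransaction early, then prepare and
    write every message while still holding the lock."""
    return n_messages * (COMPUTE_COST + WRITE_COST)

def lock_held_fixed(n_messages):
    """Proposed fix: build the whole collection outside the lock, then
    hold the lock only for the final write-out."""
    return n_messages * WRITE_COST

def counter_write_succeeds(lock_held_seconds):
    """A counter write (balancing included) succeeds only if the lock
    is released before the 10-second timeout expires."""
    return lock_held_seconds < TIMEOUT
```

With enough messages that the original run holds the lock for about 33 seconds, a balancing attempt during CABSProcess fails, while under the proposed fix the lock window shrinks well below the timeout, which is the “no possibility of this sort of problem” claim in the entry.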

Then two-thirds of the way down the page, you deal with the “Impact on User”, and you say:

“Benefit of making a fix.

“It will no longer matter if CABSProcess is running when the user tries to do many sorts of things, balancing included.

“What does the user have to do to get this problem?

“Do anything which involves writing a transaction whilst CABSProcess is running (after 19.00) when CABSProcess has sufficient work to do so that it takes more than 10 seconds to run (so probably on the larger offices).”

So just to be clear, you were warning here that the CABSProcess issue could impact upon balancing?

Gerald Barnes: Well, if the postmaster is working after 7.00, yes, that’s right. Well, or to be more precise, he’s working through 7.00 because that’s when this process ran. So I suppose he’d have been all right if he started balancing at 7.30, for example.

Ms Price: You go on to cover the impact on operations and you say:

“Benefit of fix that may not be visible to end user.

“Less support calls.”

So, in summary, you thought the risks of running a fix were outweighed by the benefits?

Gerald Barnes: Yes, it was quite an easy fix, really. So I thought quite safe.

Ms Price: Under those “Risks”:

“What live problems will there be if we do not issue this fix?

“Problems will continue to occur if the counter is being used whilst CABSProcess is running, in those cases when it takes more than 10 seconds to run.”

Referring to that risk again, in terms of operations:

“Is this a high risk area in which changes have caused problems in the past?

“Yes. However the fix proposed is self-contained and is considered unlikely to cause any problems.”

Going over the page, please, towards the bottom of this page there is an entry from David Seddon, dated 10 January 2008. He says this:

“It has been decided that no fix will be carried out for the time being given the rarity of the problem. Should the problem become more prevalent then the need for a fix will be reviewed once again. In the meantime KEL dsed5628Q has been created to cover the problem.

“With regard to this instance of the problem we have already passed details and corrective actions necessary to Post Office Limited by means of a BIM issued by the MSU … Therefore no further action is necessary and this call can simply be closed.”

Should we take it from this that a decision was made that, despite your recommendation, it was decided by 10 January 2008 that no wider fix would be implemented? So we have this narrow fix, but it appears no wider fix to the problem.

Gerald Barnes: That’s right. That is correct.

Ms Price: Could we have on screen, please, FUJ00155261. This is an email chain from September 2008. The first email in the chain starts on page 2, could we turn to that, please. Towards the bottom of the page, this is an email from Gareth Jenkins to Roy Birkinshaw, copied to you, Steve Evans, John Burton and Anne Chambers. It is dated 4 September 2008 and reads as follows:


“As requested yesterday, I’ve had a look at the relevant code and a chat with Gerald and I am satisfied that the fix that Gerald has proposed for this PEAK is low risk and should remove this particular cause of timeouts. The actual PEAK is now closed, so I’m not sure exactly what process should be followed, but effectively what I think we need is for the PEAK to be reopened and sent to RMF for further consideration in light of recent investigations.

“Are you and Steve able to progress it from here?”

In the email above, Steve Evans asks you to “liaise with Dave Seddon/Lionel to get this reopened and then back to RMF”. What was RMF?

Gerald Barnes: Release Management Forum.

Ms Price: Going back to the bottom of page 1 of this document, please, this appears to lead to an email from John Budworth to you, and a number of others, including Mik Peach and Gareth Jenkins, and he says this:


“PEAK 164429, (clone of 152376) has arrived in RMF. At the moment this is the only PEAK in RMF. I’m not sure why this has been revisited by CounterDev and Gareth as we decided we were not going to fix this back in January.

“Has something in live increased the problem or has it beed [I think that should be ‘been’] raised as an issue by the customer or elsewhere? I don’t know.

“Anyway, CABSProcess is start of LFS_COUNTER. I am not expecting any other LFS change during Horizon but it might be worth looking at LFS related PEAK 147179.”

Then going to the email above that, please, we have an email from Steve Evans:

“I note that Mik has replied, and yes this one has become a higher priority with the customer.

“It’s not related to PC147179, which I’ve actually just returned ‘no fault in product’, so doesn’t exist any more.

“Gerald has requested a target of T86 and he has gone off on leave until 23 September. Therefore a fix will not be available before the 25th.”

In this email, the customer, was that the Post Office?

Gerald Barnes: Yes, that’s correct, yes.

Ms Price: So was it your understanding that the Post Office were at least by this point aware of the issue that had arisen?

Gerald Barnes: Where does it say, “The customer”, exactly?

Ms Price: The first line of that email, “I note that Mik has replied and yes this one has become a higher priority with the customer”.

Gerald Barnes: Yes, I suppose that must be the case, I suppose. It must be.

Ms Price: John Budworth’s email reply is above that, and he says:

“Thanks all,

“I’ll check RMF stack again tomorrow but nothing other than this PEAK in there currently. I’ll authorise PEAK 164429 for T86 but would like to move this forward sooner rather than later so test and deploy as early as possible in October.”

To the extent that you are able to recall, were you involved in any discussions about the merits of a fix between the 10 January 2008 decision recorded by David Seddon in that original PEAK and the email from Gareth Jenkins on 4 September 2008?

Gerald Barnes: No, I was kept out of the loop completely. I’d have been busy looking at other PEAKs, I expect so, no, I wasn’t aware of that.

Ms Price: You explain at paragraph 32 of your statement, that is your first statement, that, having reviewed the PEAK at document reference FUJ00155366, you can see that a fix was applied on 25 September 2008 and that you were involved in applying that fix, is that right?

Gerald Barnes: Yes, that’s correct, from just my review of documents recently, because of this – my witness statement, yes.

Ms Price: You also explain at paragraph 32 that you had some involvement checking event logs in January 2009. Could we have on screen, please, FUJ00154836. About halfway down the page is an email from Penny Thomas to you and Steven Meek, copied to Gareth Jenkins and Anne Chambers. It is dated 31 December –

Apologies, if we can scroll out, please, that bottom email, 31 December 2008, please. The email is dated 31 December 2008. The subject line is “ARQs 499-509”, and then a reference “475329 – LPD 19 January 2009”.

Ms Thomas says:

“Hi Gerald

“Could you please check events for the following …”

Then giving that reference with the date range of 21 September ‘07 to 17 August ‘08.

“Many thanks


You then send an email in reply on 5 January 2009 a bit further up the page. Scrolling down, please, we can zoom out and see the whole document. That’s fine. You appear in that email to attach some results.

“Hi Penny,

“I attach the results.


“Gerald …”

Then at the top of the page there is an email from Anne Chambers to Penny Thomas, copied to Gareth Jenkins and to you. It says:

“475329 counter 3 Lock events 28 March 2008, 22.04, checkpoints being written during Smartpost upgrade. Just confirm no one logged in.”

Does this description at the top help you at all to say what you were checking events for, or not?

Gerald Barnes: I can only remember the general process. Once I joined the Audit Team and I think subsequently – I didn’t know this at the time but, reading all the material for my appearance today, I’ve discovered it – because of my old statement that the error handling wasn’t very good, a new system was introduced where the event logs were always checked before any spreadsheets or transactions were sent out by the Audit Team, to check that there were no suspicious events that occurred at the time of the transaction as reported in the ARQ.

So a database of all the event logs – all the event logs were extracted, they were stored in some sort of database and I was partly involved in, when requested, getting events back for a given date range.

So what Anne is saying is she’s looked at these events, and decided that, other than these three lock events, there was nothing suspicious in the event logs that I returned to her. She’s saying those are the suspicious ones and, moreover, she’s saying, as long as no one was logged in at the time, they don’t matter.

Ms Price: Can you say whether this check related to the CABSProcess issue we’ve been looking at or not?

Gerald Barnes: I think – well, as I say, because I have reviewed the – what went on from the information presented to me before my appearance today, I can say yes, because of my comments – not necessarily the CABS issue, just because I said in general – these Riposte errors can be silent to the postmasters in general, well, the CABSProcess as well, but in general it is the case, that the policy was adopted of always checking all the event logs for any ARQ evidence presented, so that we can be – they could be more certain that nothing like that had gone on.

So that was a new – because of my – it now transpires from what I’ve read, because of my comments, this new process was adopted. But it was all – never – at the time I knew nothing about it. It was all completely silent to me but I can see that is the case from what I’ve read subsequently. Just – well, in recent weeks.

Ms Price: Could we have on screen, please, FUJ00155402. About halfway down the page is an email from Penny Thomas to Steve Evans, among others. It is dated 8 January 2009, so three days after you replied with results in the email chain we’ve just looked at. Ms Thomas says this, under a subject “Audit Issue”:

“As a result of our meeting today the following actions have been agreed:

“1. We will event check all transaction data supplied to POL where that data falls between May ‘07 and November ‘08.

“2. The check will focus on events where the CABSProcess has produced a lock from 1900 hours to 1910 local time.

“3. Penny to provide a list of 195 outlets with time frame.

“4. Alan to provide query.

“5. Gerald to run the event check through the database.

“6. Steve Denham to be advised the number of residual events and will discuss with Mik Peach.

“7. Residual events to be reviewed.

“8. Penny (or cover) will check ARQ data retained in the audit room or retrieve message stores, as required.

“9. Pete to update security incident register.”

So it appears that an agreed action for you, although you are not on the recipient list here, was for you to run the event check through the database. Can you help with what the event check was?

Gerald Barnes: Yes, well, I can remember this moderately well. It was all automated later on but, before I joined the Audit Team, as I said, they brought all the events back from the audit system for all counters and they stored them all in a database, so that, for any given date – date range and I assume FAD code too – I can’t remember it specifically, I imagine FAD code too – you can get all the post office counter events output in a spreadsheet, which could be supplied.
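The lookup Mr Barnes describes – all counter events held in a database, queried by FAD code and date range – might be sketched as follows. This is a minimal illustration in Python with SQLite; the table name, column names and sample rows are assumptions for illustration, not Fujitsu's actual schema.

```python
import sqlite3
from datetime import date

# In-memory stand-in for the audit events database (schema is hypothetical).
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE events (fad_code TEXT, event_date TEXT, detail TEXT)")
db.executemany(
    "INSERT INTO events VALUES (?, ?, ?)",
    [
        ("072128", "2008-03-28", "Lock event during Smartpost upgrade"),
        ("072128", "2008-05-01", "Routine checkpoint"),
        ("123456", "2008-03-28", "Routine checkpoint"),  # different branch
    ],
)

def events_for(fad_code: str, start: date, end: date) -> list[tuple]:
    """Return all events for one branch (FAD code) within a date range."""
    return db.execute(
        "SELECT event_date, detail FROM events "
        "WHERE fad_code = ? AND event_date BETWEEN ? AND ? "
        "ORDER BY event_date",
        (fad_code, start.isoformat(), end.isoformat()),
    ).fetchall()

# The date range quoted earlier in the evidence: 21 September '07 to 17 August '08.
rows = events_for("072128", date(2007, 9, 21), date(2008, 8, 17))
```

The results would then be exported to a spreadsheet and reviewed for suspicious entries, as described in the evidence.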

Ms Price: Ms Thomas’ email was then forwarded to you by Steve Evans, scrolling up the page, please, with a request to discuss:


“We will need to discuss this (below) in the AM.


Do you recall discussing the task you had been allocated with Mr Evans?

Gerald Barnes: I remember the task. I don’t remember specifically discussing it but, if it says in the email this was going to happen, I imagine it did. I certainly remember the database of all the events. I remember that quite clearly.

Ms Price: Could we have on screen, please, FUJ00155421. About a third of the way down the page, we have an email from Penny Thomas to Dave Posnett from the Post Office. It is dated 4 February 2009. The subject line is “Security Incident”. Ms Thomas says this:

“We are pleased to advise that our analysis of data covering 1 May ‘07 to 30 November ‘08 has been completed.

“The event logs have been checked for all data provided to POL as a result of the 195 ARQs which fall within the time frame. A total of 27 instances of concern were identified. All instances have been fully analysed and we can confirm that the locking was caused by contention between the EoD process and a Riposte checkpoint being written. No transactions or balancing activities carried out at the branches were affected.”

There is reference here to the 195 ARQs which fell within the time frame. There was a reference in Penny Thomas’ email of the 8 January 2009 to there being 195 outlets within the time frame. Would you agree, therefore, that this email seems to be referring to the same issue?

Gerald Barnes: I would say almost certainly but, I mean, I couldn’t be 100 per cent certain, I suppose. But I would say very likely.

Ms Price: Do you recall being made aware of the outcome of the checks that were done on the data provided to the Post Office?

Gerald Barnes: No, it was all – unless it was copied to me in an email and I didn’t read it or something, it was all – I was aware of the checking of events, but the reason it was done was, at the time, not something I was aware of, though now I can see the reason. But, at the time, I would – I don’t think I was aware, actually, no.

Ms Price: Your task of running the event check through the database, was that to return results which were sent on to others for analysis?

Gerald Barnes: That’s correct, yes. That’s correct.

Ms Price: You offer some reflections on the CABSProcess issue at paragraph 33 of your first statement. Could we have that on screen, please. It is page 16 of WITN09870100. You say here at paragraph 33:

“The CABSProcess issue highlighted a problem that could easily be caused by another system process at any time of day. In retrospect, error handling should have been tightened generally. For example, when I wrote the software to migrate from Legacy Horizon to HNG-X, I kept this in mind. The postmaster pressed the migration button which appeared on migration day and if anything went wrong the postmaster got a message displayed saying something to the effect of: ‘An error has occurred please contact the Helpdesk’. The program then stopped further processing and detailed evidence was recorded that would enable the Helpdesk to identify the issue (possibly after escalating the issue to me). In my opinion, this sort of error handling is the safest. When something goes wrong everyone knows about it immediately and nothing is committed – in this case, the post office branch would not be migrated and needed to continue using Legacy Horizon a bit longer.”
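The pattern Mr Barnes describes in this paragraph – stop further processing on any error, record detailed evidence, and show the user a clear message rather than failing silently – might be outlined like this. A minimal Python sketch of the general principle only; all names are hypothetical and this is not Horizon's actual migration code.

```python
class MigrationError(Exception):
    """Surfaced to the user with a plain message; detail goes to the log."""

def migrate_branch(steps, evidence_log):
    """Run migration steps in order; on any failure, log detailed evidence,
    stop cleanly, and commit nothing further (fail loudly, never silently)."""
    completed = []
    try:
        for name, step in steps:
            step()
            completed.append(name)
    except Exception as exc:
        # Record detailed evidence for the Helpdesk to diagnose the issue:
        evidence_log.append(
            f"step={name!r} error={exc!r} completed={completed!r}"
        )
        raise MigrationError(
            "An error has occurred. Please contact the Helpdesk."
        ) from exc
    return completed

# A successful run completes every step.
ok_log: list[str] = []
ok = migrate_branch([("copy", lambda: None), ("verify", lambda: None)], ok_log)

# A failing step halts the process and produces a clear user-facing message.
failed_log: list[str] = []
try:
    migrate_branch(
        [("copy", lambda: None),
         ("verify", lambda: (_ for _ in ()).throw(IOError("disk fault")))],
        failed_log,
    )
except MigrationError as e:
    user_message = str(e)
```

The contrast with the CABSProcess issue is that here nothing "rolls over silently": the user is told at once, and the branch simply stays on the old system until the fault is resolved.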

Does it follow from what you say in this paragraph that, in retrospect, error handling should have been tightened generally, that although there was a fix done following the CABSProcess issue, as far as you were aware, there was not a wider change to coding to prevent silent failures in the system?

Gerald Barnes: Not in a – not in a comprehensive manner. I think little improvements were done all the time but I think, ideally, just as when I designed this migration software, before they even started, they should consider the possibility of some system code failing. What do we do if that happens? And when you’re choosing a cash account, the obvious thing to do is just display a message to the postmaster that “An error has happened, please contact the Helpdesk”. Just as in the migration software, similar thing. Anything goes wrong, just log as much information as possible, and just say to the postmaster clearly and precisely “Please contact the Helpdesk”. Don’t just sort of roll over silently as though he thinks it’s all – everything is fine when it isn’t.

That, in my opinion, no – I mean, retro – hindsight is a wonderful thing, isn’t it, but, in my opinion, that’s the way error handling should have been done.

Ms Price: Should the Chair take it from this paragraph that you consider the CABSProcess issue was a missed opportunity to address deficient coding practices which led to silent failures?

Gerald Barnes: No, well, I mean, I think it was two – we’re just about to replace Horizon with HNG-X so the better thing to do would be to make sure the HNG-X software learned the lesson, I think. It would just have been too expensive to do a thorough job at that stage.

Ms Price: There was, in fact, another issue with which you had involvement in January 2008, in addition to the CABSProcess issue which caused you to comment on the adequacy of the error handling process, and that’s one that’s addressed at paragraph 38(a) of your first statement.

Could we have on screen, please, FUJ00155224. This is a clone of another PEAK and this cloned call contains some comments from you following the report of a stock unit rollover issue which was being experienced by a user in branch.

Could we go to page 6 of this document, please. The second entry on this page is made by you and is dated 15 January 2008. Starting on the second line of your entry, you say this:

“The problem was in fact already flagged. A message in the audit log pinpointed the precise message that caused the problem.

“The error handling of balancing is deficient in some ways. In most cases an error is just logged and the code blunders on regardless leaving the system locked. What should happen is that the error should be logged, the process cleanly aborted, an error message displayed to the user and the system left so that he can do something else. I hope the HNG-X version is much better. I am not sure it is worthwhile spending time improving the EPOSS version which is shortly to be replaced; it would be better spending the same effort making the HNG-X version better. I had already requested that this be advised to the HNG-X team in PC0152376.”

So you were, once again, flagging the error handling problem as you saw it?

Gerald Barnes: That’s right, yes.

Ms Price: As far as you were aware, is it right that no material changes were made to this wider problem relating to error handling at the time?

Gerald Barnes: Well, it would have just been uneconomic, it was too late, but we were always doing little improvements, though. You see, once the system is rolled out and is in maintenance role and developers, like I, are just maintaining it, always, when you do a fix, you really want the minimum code change to solve the problem because it reduces the amount of regression testing needed for the release.

To comprehensively rewrite the error handling would just be a massive job. That would be a massive regression testing exercise and so would be extremely expensive and, since it was just being rewritten anyway, it seemed particularly pointless.

Ms Price: You say in your statement that you bore in mind the need for good error handling processes, when you wrote the software to migrate from Legacy Horizon to Horizon Online and you’ve just discussed the issues there would have been by, on a shorter term basis, making changes. Can you help with whether there were any other steps taken by you or anyone else within Fujitsu to ensure that good error handling processes were introduced across the board, either at the time of the migration to Horizon Online, or later?

Gerald Barnes: Well, I can’t – I passed on my comments to the HNG-X team, I hope they got passed on. I can’t say, though, I was not really involved in that software. Certainly, the migration software I wrote, very much took that into account and I wrote it with the very comprehensive error handling in the first instance and, indeed, every counter was migrated from Horizon to HNG-X with not very many issues, really.

Ms Price: Sir, I wonder if that might be a convenient moment for a short afternoon break. I think you’re on mute, sir.

Sir Wyn Williams: Does short mean less than 15 minutes?

Ms Price: Yes, please. Ten minutes, sir, if we could.

Sir Wyn Williams: Okay. Ten minutes. So when do we start?

Ms Price: That takes us to 3.35.

Sir Wyn Williams: Right, thank you.

Ms Price: Thank you, sir.

(3.23 pm)

(A short break)

(3.35 pm)

Ms Price: Hello, sir, can you see and hear us?

Sir Wyn Williams: Yes, thank you.

Ms Price: Mr Barnes, turning, please, to events after you joined the Audit Team in 2009. You say in your statement at paragraph 13 that when you joined the Audit Team, Fujitsu was changing from using Escher’s Riposte software to Fujitsu’s own bespoke software, HNG-X. Can you explain, please, what that meant for the software which was used to perform ARQs for the Post Office?

Gerald Barnes: Right, well, it’s – because we’re no longer using Escher’s Riposte system – well, there are two things. First of all, the audit software used the Escher software to produce its spreadsheets of results, which I only discovered through reading, really. I wasn’t there at the time. But it used the Escher software to produce its spreadsheets and results, so that was one thing. So, therefore, just to save the licence fee that we paid to Escher, we wanted to get rid of that component.

But, on top of that, in addition, the Audit Team had to cope with the new transactions which were going to be written by the new HNG-X software, which was a Fujitsu rewrite of the – everything that was done by Riposte before was going to be rewritten by Fujitsu.

So the audit software had to cope with this new format transactions too and so a team that mainly completed their work before I joined the Audit Team wrote a component called the Query Manager Service, whose purpose was to produce the spreadsheets very similar to that which was produced by the old audit system for Riposte and, in addition, at the same time, enable it to produce those spreadsheets for the new HNG-X software.

Ms Price: Is it right that it was – is it XQilla –

Gerald Barnes: XQilla.

Ms Price: – which was used after you joined the Audit Team to run audit queries?

Gerald Barnes: That’s correct, yes. That’s right.

Ms Price: Is it right that the Audit Team still uses XQilla to run audit queries today?

Gerald Barnes: That’s correct, yes.

Ms Price: Does it follow from the fact that you only joined the Audit Team in around 2009 that you would not have been familiar with the high-level design documents relating to Legacy Horizon, covering the design and requirements of the audit harvester created in the early 2000s?

Gerald Barnes: No, I wouldn’t have been – I wasn’t very aware of that. I knew a little bit about it but I wasn’t aware of the details.

Ms Price: You deal with the audit query process for Horizon Online at paragraph 16 of your first statement. Could we have that on screen, please. It is page 6 of WITN09870100. At paragraph 16, you say:

“In relation to HNG-X, the process of generating a spreadsheet of transactions (similar to the ARQ spreadsheet) is as follows:

“a. Files to be audited are placed on many ‘shares’ across the estate. A share is a folder of a computer that is accessible by another computer.

“b. ‘Gatherers’ on the audit server bring the files into the audit server, where they are stored on a special long-term storage device (known as an audit archive – Centera to begin with, which was later replaced by Eternus) and indexed using a Structured Query Language (‘SQL’) database on the audit server. A checksum of the file is also stored too (a checksum is effectively a unique numerical identifier that is allocated to a file).

“c. A special tool on audit workstations can then be used to display stored files and retrieve them. As these stored files are retrieved, their checksum is checked. Some of the stored files are files of transactions and extra software is available to generate spreadsheets of transactions.”
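The checksum mechanism described at (b) and (c) – store a digest alongside each archived file, then recompute and compare it on retrieval to detect corruption – might be sketched like this. A minimal Python illustration using SHA-256; the witness statement does not say which algorithm Horizon's audit archive actually used, and the names here are hypothetical.

```python
import hashlib

# Stand-in for the long-term audit archive: name -> (content, stored checksum).
archive: dict[str, tuple[bytes, str]] = {}

def store(name: str, content: bytes) -> None:
    """Archive a file together with a checksum of its content."""
    archive[name] = (content, hashlib.sha256(content).hexdigest())

def retrieve(name: str) -> bytes:
    """Fetch an archived file, recomputing its checksum to detect corruption."""
    content, stored = archive[name]
    if hashlib.sha256(content).hexdigest() != stored:
        raise ValueError(f"checksum mismatch for {name}")
    return content

store("txns_branch.dat", b"transaction data")
data = retrieve("txns_branch.dat")
```

If the archived bytes were altered after storage, the recomputed digest would no longer match and retrieval would fail rather than silently return corrupted data.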

Is it right that you were not personally involved in responding to ARQs that were submitted by the Post Office to Fujitsu in relation to investigations, court proceedings or disciplinary proceedings?

Gerald Barnes: That’s correct. I simply was part of the team that supported the software they used.

Ms Price: You do address in your statements, however, your involvement in a number of issues which could affect the accuracy of ARQ data, and one of these is the duplicate transactions issue which arose in 2010. You’ve addressed this issue at paragraphs 34 to 37 of your first statement, and at paragraph 34, you describe the result of the issue which arose to have been that multiple instances of one transaction could appear on a spreadsheet generated as part of the ARQ process and it would not be clear that they were the same transaction. Is that an accurate summary?

Gerald Barnes: Exactly. That was the problem, yes.

Ms Price: Could we have on screen, please, FUJ00172183. This is a PEAK with reference PC0200468. The summary reads:

“Horizon Audit Spreadsheet Producing Duplicate Transactions.”

There is an impact statement, dated 23 June 2010, which says this:

“From Penny – In a nutshell the HNG-X application is not removing duplicate transactions (which may have been recorded twice on the audit server) and they are appearing in the ARQ returns. For the old Horizon application Riposte automatically removed duplicate entries. An initial analysis shows that one-third of all ARQ returns (since the new application has been in play) have duplicated transactions.”

Going then to the entries in the log themselves, the second entry is dated 21 June 2010 and is made by Penny Thomas, and she says:

“While performing an audit retrieval for branch 072128 duplicate transactions have been found on 3 June ‘09. Initial analysis shows that duplicate records are held in 2 different audited TMS files.”

Then scrolling down to the final entry on this page, please, 22 June 2010, this is an entry made by you, and we have this:

“The processing is done by QueryDLL.dll. The way it works is that it processes all the results in a given file, building up an internal table of transaction sequences for that file. Then at the very end of processing the file it dumps the internal table to the RFIQueryFileSequence table. It does not cross-check the transactions in one file against those in another file.”

You say that:

“Two solutions are possible.

“The ‘easy solution’.

“As each transaction is processed a check is made with the RFIQueryFileSequence table and if it is already there the transaction is ignored writing a warning to the query log. The problem with this solution is that a query needs to be made to the database for every transaction.

“The ‘more difficult solution’.

“The internal table which at the moment is built up on a per file basis is changed to being built up on a per query basis. The check for duplicate transactions is then done within the internal table. This is a much more thorough approach but will take much more work.”
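The "more difficult solution" in Mr Barnes's entry – building the duplicate-check table per query rather than per file, so transactions in one file are checked against those in another – might look like this in outline. A Python sketch only; the record layout and function names are assumptions for illustration, not the QueryDLL.dll code.

```python
def dedupe_per_query(files: list[list[tuple]]) -> tuple[list[tuple], list[str]]:
    """Process all files belonging to one query, skipping any transaction
    already seen in an earlier file and logging a warning for each duplicate.

    Each record is a (counter, sequence_number, detail) tuple; the first two
    fields are taken to identify the transaction.
    """
    seen: set[tuple] = set()      # built up per query, not per file
    results: list[tuple] = []
    warnings: list[str] = []
    for file_records in files:
        for record in file_records:
            key = record[:2]
            if key in seen:
                warnings.append(f"duplicate transaction {key} ignored")
                continue
            seen.add(key)
            results.append(record)
    return results, warnings

# The same transaction recorded by mistake in two audited files:
file_a = [(3, 101, "sale"), (3, 102, "refund")]
file_b = [(3, 102, "refund"), (3, 103, "sale")]
results, warnings = dedupe_per_query([file_a, file_b])
```

The per-file approach described in the PEAK would miss the duplicate of sequence 102 because it only checks within a single file; the per-query table above catches it.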

Then there is a further entry from you, also dated 22 June, which outlines the detail of the fix for this problem, and about halfway down the page we have “Impact on user”, and “Impact on User” says:

“Occasionally duplicate transactions are listed in the spreadsheets produced and presented to court for prosecution cases. These can give the defence team grounds to question the evidence.”

Then further down, in response to the question, “Have relevant KELs been created or updated?”, you say:

“No KELs have been created for this since we intend to fully resolve the issue shortly.”

If we scroll down, there are risks that are outlined of releasing and not releasing a fix, and here you say:

“If we do not fix this problem our spreadsheets presented in court are liable to be brought into doubt if duplicate transactions are spotted.”

Going over the page, please, the entry of 23 June 2010, from Penny Thomas:

“Initial analysis of all ARQ returns since the HNG-X application has been implemented identifies approximately one third (of all returns) have duplicate entries. This is now extremely urgent.”

Scrolling down, please, towards the bottom of the page, there is an entry on 7 July 2010. Right at the bottom of the page, that last entry, it says:

“Fix Released to PIT.”

Can you just help with what that means?

Gerald Barnes: Yes, that’s the team which generated the actual thing that was automatically delivered to the various platforms. In this case, it would have been the audit server.

Ms Price: Then over the page –

Gerald Barnes: Post Office Integration Team, possibly. I can’t – I’m just trying to think what – maybe Post Office Integrate – I’m guessing – or Pathway Integration Team. Something like that.

Ms Price: Then over the page, please, to the second entry of 7 July 2010. It says here:

“PEAK has been test installed in Integration.”

What does this mean, please?

Gerald Barnes: Well, integration, that’s the PIT team, that they actually produce – so development produce V baselines, with the fix in, and that goes to the integration team who produce D baselines, which are ready for automatic deployment on the various platforms which, in this case, would be the audit server.

Ms Price: There are then a series of entries which follow before the final entry on this page, dated 1 September 2010, made by Penny Thomas, which notes:

“Fix successfully deployed, closing call.”

So it seems from this PEAK that, although the issue was being raised in June 2010, it was not the subject of a fix until 1 September 2010; is that right, from what we can see on this log?

Gerald Barnes: Well, that’s certainly when Penny could see it was successfully deployed. Well, roughly speaking, yes. It’s the – you’ve got 30 July, John Rogers tested successfully, completed and documented in LST. Yeah, it takes – after it gets tested in LST, it doesn’t usually take long before it’s deployed to live, so from 30 July to 1 September, um, sounds a bit of a long time, but anyway that sort of thing, yes. That sort of thing.

Ms Price: Can we have on screen please FUJ00171848. The reference on this PEAK is PC0205805. The summary is:

“Audit – Duplicate Message sequences are not recorded by Fast ARQ retrieval.”

The call was opened on 27 October 2010, if we can scroll down a little to see that. Before we turn to the detail of the PEAK, can you explain, please, the difference between fast and slow ARQs?

Gerald Barnes: Right, well, the slow ARQ was the original method, which was a sort of more labour intensive system for the operator but more flexible, in which, first of all, they had to supply the FAD, the date range and the files they wanted to get back, and then they went to another screen to supply the date range they wanted to filter the FAD for, and then they went to another screen to supply the query they wanted to run on the FAD.

So it was all quite doable but someone called Steve Meek, whose name has come up before, automated this process, so you had one screen, which you have the FAD, the date range and, also, an optional number of extra days for files gathered late and you just set it going with one form. So it was just quicker for the operator, the fast ARQ was quicker for the operator.

Ms Price: The issue addressed in this PEAK arose in the context of fast ARQs; is that right?

Gerald Barnes: That’s correct, yes. Well, that’s what it says. I mean, that’s what it says.

Ms Price: The “Impact Statement”, dated 5 November 2010 appears to have been entered by you and says this:

“The Fast ARQ interface does not provide the user with any indication of duplicate records/messages.

“This omission means that we are unaware of the presence of duplicate transactions. In the event that duplicates are retrieved and returned to POL without our knowledge the integrity of the data provided comes into question. The customer and indeed the defence and the court would assume that the duplicates were bona fide transactions and this would be incorrect. There are a number of high profile court cases in the pipeline and it is imperative that we provide sound, accurate records.”

Looking then, please, to the entry at the bottom of this page, dated 1 November 2010, again this is an entry made by you. It says:

“Andy and I have looked at this. We think the method most compatible with existing behaviour is as follows –

“Check for duplicates for HNG-X in a similar method to how duplicates are checked for in Horizon.”

Do you mean Legacy Horizon?

Gerald Barnes: That’s right, that’s correct, yes.

Ms Price: “For Horizon they are legitimately logged in the audit log and then are ignored (because it is just the identical message stored by mistake in more than one transaction file). For HNG-X, in the Fast ARQ case, their detection will cause them to be logged in the QueryLog and a count kept of how many there are; they will not be ignored.”

Then you go on to detail a proposed fix.

You made a further entry on 5 November 2010, which reads:

“I have built a prototype QueryDLL.dll which solves this problem. Now if duplicate HNG-X messages are detected the Fast ARQ fails at the client with the message ‘filtering failed’ displayed at the bottom of its form and on the server in the QueryLog there are detailed messages describing the duplicates found.”
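The behaviour of the prototype described here – a Fast ARQ that fails outright when duplicate Journal Sequence Numbers are detected, showing only "filtering failed" at the client while the detail is written to the query log – might be sketched as follows. A hypothetical Python outline of the behaviour described in the entry, not the actual QueryDLL.dll implementation.

```python
class FilteringFailed(Exception):
    """Raised to the client with a terse message; detail goes to the QueryLog."""

def fast_arq(messages: list[tuple], query_log: list[str]) -> list[tuple]:
    """Return the retrieved messages, refusing to produce any output at all
    if duplicate Journal Sequence Numbers (JSNs) are present."""
    seen: dict[int, tuple] = {}
    duplicates = []
    for msg in messages:
        jsn = msg[0]  # first field taken as the JSN
        if jsn in seen:
            duplicates.append((jsn, seen[jsn], msg))
        else:
            seen[jsn] = msg
    if duplicates:
        # Detailed messages describing the duplicates go to the server-side log;
        # the client sees only the terse failure.
        for jsn, first, second in duplicates:
            query_log.append(f"duplicate JSN {jsn}: {first!r} / {second!r}")
        raise FilteringFailed("filtering failed")
    return messages

query_log: list[str] = []
try:
    fast_arq([(1, "sale"), (2, "refund"), (2, "refund")], query_log)
except FilteringFailed as e:
    client_message = str(e)
```

Failing the whole retrieval, rather than quietly returning a spreadsheet containing duplicates, forces an operator to investigate before any data could be passed on as evidence.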

Then there is a further, more detailed entry from you, also dated 5 November 2010, where you provide a technical summary, and you say:

“HNG-X can rarely produce transactions with duplicate Journal Sequence Numbers. At the moment, when running a Fast ARQ on the audit server, these duplicates are not noticed. This means that the evidence presented by the Prosecution Team may show duplicate transactions without it being noticed; the Defence Team may spot this and call into question the integrity of our data.”

Scrolling down, please, to the bottom of the page “Impact on User”:

“HNG-X transactions with duplicate JSNs may not be noticed. This will call into question the reliability of evidence presented by the prosecution team.”

Going to the top of the next page, please, on “Have relevant KELs been created or updated?”:

“It was not felt that a KEL was required because there are only two people in the prosecution team and they are both fully aware of the problem.”

Who were the two people in the prosecution team; can you recall?

Gerald Barnes: Well, Penny Thomas would have been one.

Ms Price: Do you recall Andy Dunks?

Gerald Barnes: Well, he certainly was there at one – well, still is, still is – might have been Andy I was referring to, I can’t – quite likely. I would say quite likely but I couldn’t say with certainty. The only one I can say with certainty is Penny Thomas.

Ms Price: Whose decision was it whether a Known Error Log was created?

Gerald Barnes: Well, I think that’s – if I’m writing this, then it’s me, really, I suppose. You’ve either got to produce something in the KEL or you’ve got to give an explanation of why you’re not producing something in the KEL. Well, my explanation is quite simple: there’s only two people and they’re well aware of it anyway, so there’s no point having a KEL entry because they know about the issue.

Ms Price: This PEAK was opened around two months after the last PEAK we looked at was closed, following a fix on 1 September 2010. Is the issue discussed in this PEAK the same as that discussed in the previous PEAK?

Gerald Barnes: Yes, that’s right. That’s right. It would have been – the same fix as applied before, I think, would have been present here. So you get – whenever you do the ARQ, you get – the query handler log is always generated. That would have listed the duplicates here anyway. I can’t quite remember what we’re fixing here. Not too sure exactly what we were doing. But –

Ms Price: Can you help – apologies. I interrupted.

Gerald Barnes: Well, yeah, I’m not – what – I’m not sure that Andy and – I’m not sure exactly what change would have been going on here. Does it say somewhere exactly what we were going to do to address this issue?

Ms Price: Well, the – going a little further back up the page, if we can just stop there, just casting your eye down there, does that help you at all? It says, for example:

“Does the fix require any manual deployment baselines?

“The fix does not require any manual installation; it would just be a replacement file …

“The coding of the fix is complete, however further regression tests still need to be run.”

Gerald Barnes: Okay, well, I can’t remember exact – all I know is what happens right now. The intermediate steps of how we got there, I’m not too sure of.

Ms Price: Putting it another way, why was there still an issue if a fix had been implemented on 1 September?

Gerald Barnes: I’m not too sure, to be honest. I would have thought on the first fix we’d logged all the duplicates in the query handler log. That would have been the case here, whether we’re doing something more. Maybe, what it might have been, is we simply – ooh. Yes, it could be that we actually got the Fast ARQ to actually say: right, duplicates have occurred we’re stopping. It could just have been that the Fast ARQ just failed, and you’d get just some message saying “Look in the query handler log” and then you would see the duplicates listed.

It might have been that – I can’t, from memory, remember exactly what the fix was. It might have been that. So maybe it just stopped running and refused to produce anything.

Ms Price: Scrolling down, please, and over the page, looking for an entry on 24 November 2010.

Gerald Barnes: Oh, Andrew has written something.

Ms Price: That entry there, 24 November 2010, Andrew Mansfield:

“Sarah Selwyn has requested an audit maintenance release prior to the next DC_AUDIT planned release due to go live on 14/05/2011.

“Five PEAKs are requested for this maintenance release [and they’re listed].

“This is an edited version of the text of Sarah’s original email to Sheila Bamber:

“We would like to get these PEAKs targeted ASAP since these are impacting SSC and the Litigation Support Group in their support of the Post Office litigations. There is a risk that these teams will not be able to fulfil their OLTs to the Post Office as defined in SVM/SDM/SD/0017 …”

In terms of when the issue was fixed, you refer to another PEAK at paragraph 37 of your first statement, which you say suggests that the issue with duplicate transactions was fixed in or around November 2010. Could we have that on screen, please. It’s FUJ00171892. The reference for this PEAK is PC0205353. The summary reads:

“LST – Audit – Duplicate message sequences are not reported if they are identical.”

This is not a PEAK log which contains entries made by you but, since you refer to it in your statement, can we take it that you have reviewed it?

Gerald Barnes: Yes, yes, yes, yes.

Ms Price: The “Impact Statement” here says:

“It is important that any duplicate messages in the retrieved audit data are highlighted to the user.

“Duplicates are not being highlighted when two message sequences have the same start and end message sequence numbers.”

So it gives some examples: sequences X to Y would not be reported as duplicates; sequences X to Y would report a duplicate:

“This is a very serious issue. We experienced the presence of duplicate Horizon transactions which were not removed when the HNG-X application was introduced. POL did not accept a manual workaround and the ARQ service basically stopped for almost 2 months.

“The issue contained in this PEAK came to light on 21 October and I have instigated the creation of a macro which will identify if duplicated transactions are contained within a spreadsheet. We will need to generate an additional spreadsheet containing the JVN and check for duplicates by using the macro. This will increase our workload by 15-20 minutes for each ARQ containing HNG-X transaction records.

“The real problem will arise if we do identify duplicate transactions because POL is not likely to accept a workaround for transaction records used for Litigation Support.”

That statement is dated 25 October 2010. Is this duplicates issue the same as the Fast ARQ –

Gerald Barnes: No, no, it’s a much more – it looks like a much more specific issue. It’s – “When two message sequences have the same start and end message sequence numbers”, so it looks like some very specific issue that Andy Mansfield fixed.

Ms Price: If we could go to page 5 of this document, please, top of the page, an entry on 25 November 2010 reads:

“Cleared in release 3.13 (Audit System) and tested in LST under release notes HRU7206 and HRU7239.

“Closing call.”

Was this the entry which led you to believe that the duplicate transactions issue had been resolved in or around November 2010? This is the document reference –

Gerald Barnes: Yes, I think so, I mean, I get a bit confused with the exact timescale but what happens right now is that spreadsheets we send to the – and that has been for a long time – is that the spreadsheets we send always have in a summary sheet details of all the duplicates and gaps. So each spreadsheet we submit has all that information in the spreadsheet. That’s the present and been that way for a while, but getting there has been a slightly not so – we haven’t got there in one go, as it were. That is the way it’s been for a long while.

Ms Price: The last entry we looked at in this was on page 4. Apologies, if we can have back on screen, please, FUJ00171848. Looking at page 4, please. Going back to Andrew Mansfield’s entry dated 24 November 2010, that was the last entry that we looked at.

Going, please, to the top of the next page to page 5. There’s an entry from you, dated 3 December 2010, saying:

“A fix will now be prepared and tested. It will then be stored in VSS-InfDom. It will be handed over on 24 December.”

There is then an entry on 14 December from you and you say:

“It has now been decided that the detection of duplicate HNG-X messages will not terminate the FAST ARCs.”

Is that supposed to be ARQs?

Gerald Barnes: That’s right, yes.

Ms Price: “Duplicates will be logged by QueryDLL at the server initially in the QueryHandler.log and eventually in the close log both for Horizon and HNG-X transactions. Duplicate HNG-X transactions will also be logged by the client in its spreadsheet but duplicate Horizon transactions will be eliminated at the server silently since they are known always to be benign.”

Then on 29 December the entry from you, we have:

“Fixed by version of NWB_Legato_Recover.exe and version of QueryDLL.dll handed over in AUDIT_EXTRACT_SVR [and the reference].”

Going to the bottom of the page, please, on 19 January:

“Tested in LST as part of Audit Release 3.24.

“Duplicate message sequences are now recorded in the Query Handler and Closure (RFI) log files, for both Slow and Fast ARQs.”

There are some further entries over the page and a final entry on 27 April 2011, which reads – there are two entries here, John Budworth first of all:

“Applied to live 03/04/2011 as part of Audit Release 03.24.”

Then we have “CALL … closed” entry by Penny Thomas on 27 April 2011.

Having looked at this document, does it remain your understanding that the duplicate transactions issue was fixed in or around November 2010 or do you think that may have been later?

Gerald Barnes: It looks like it might have been later, actually, yes, from this. A complete fix, yes, it looks like it might have been later.

Ms Price: At paragraph 36(b) of your statement you address another problem which arose in relation to Fast ARQs relating not to duplicate transactions but to missing transactions. Could we have on screen, please, FUJ00171894. The reference for this PEAK is PC0207787, the summary is:

“Audit – Transaction Gap info overwritten in Summary worksheet.”

The “Impact Statement” written on 18 January 2011 says:

“The problem will only occur in exceptional circumstances but should be fixed in case the exceptional circumstance happens.

“If it does occur, transaction gap information is overwritten in the results spreadsheet and we would not be able to send the ARQ to POL. We would probably attempt to resolve the cause of the gaps or duplicates before sending the output to POL in any case, but the problem really ought to be fixed.”

Recognising that this is not a PEAK log into which you made entries but one you commented on in your statement, is this problem a distinct one from the duplicates problem we’ve been looking at –

Gerald Barnes: Um –

Ms Price: – or part of the same problem?

Gerald Barnes: No, this is – this appears to be about gaps. So you’ve got duplicates and you’ve got gaps. So all the messages written have a message number and you can have a duplicate, but also, very rarely – and it is very, very rare, actually – you can have a gap, ie no message at all. This is referring to gaps, which – duplicates are quite common, particularly for – well, actually duplicates were quite common for Horizon but not for HNG-X, really, I don’t think.

Gaps weren’t common for anything but you’ve got to check for them and this seems to be saying that there was some problem with reporting gaps.

Ms Price: Going, please, to page 4 of the document, there’s an entry of 20 June 2011 there:

“PEAK has been test installed in integration, routing back to source.”

Then there are three entries on 30 June 2011, one from Mark Ascott saying, “Successfully tested by LST”, and one final one closing the PEAK.

Can you help with whether there was a further fix in relation to this issue, in addition to the fix implemented for duplicate transactions, or is this part of the same thing?

Gerald Barnes: Well, I assume this is a separate issue. I assume this is a separate one.

Ms Price: In your first statement you addressed a number of other PEAKs which you consider may have had an impact on the audit log. I don’t propose to take you to all of those, but one of these relates to an issue arising in April 2013. Could we have on screen, please, FUJ00226106. About two-thirds of the way down the page, there’s an email from you to CSPOA Security. Can you just explain please which team that was?

Gerald Barnes: That’s Cyber Security Post Office Account, which is – which is the same as the – well, it’s what I sometimes call the prosecution team, sometimes it’s – I suppose it’s not – I suppose – it’s part of the same thing, yes. It’s the same – Penny, et cetera, at the time.

Ms Price: It is copied to Rajbinder Bains and Andy Dunks, among others. It is dated 15 April – apologies, if we can go down, please, that email below, 15 April, an email from you. The subject line is “Possibility of missing transactions”, and you say:

“A serious flaw has recently been spotted in the audit code. It was introduced in the fix to PC0187097 quite some time ago (but post-HNG-X). There is a small possibility of missing transactions on generated spreadsheets if the query handling was run during the evening Query Manager shutdown. Please raise a priority PEAK on this issue and send it to Audit-Dev.”

Mr Dunks replied to you the next day, a bit further up the page, please, saying this:

“Can you confirm that we’re talking about as far back as September 2009?

“Are you able to pop down and explain and show us what we are to look for, as we will need to put together some time scales to complete this task.”

Then above, you reply and say:

“I will come down in a few minutes.”

Just to be clear, the issue you were flagging appears to have the potential to lead to transactions missing from audit data provided to the Post Office by Fujitsu; is that right?

Gerald Barnes: Yes, that’s correct. Though, in fact, it would be quite likely – it might be noticed by our gap checks because, if something was getting missed, unless it was at the very beginning, at the very end of the range, it would be noticed by the gap check but, yes, potentially. Potentially.

Ms Price: Was Andy Dunks right when he was asking if this could go back as far as September 2009? Do you know how far it could have gone back?

Gerald Barnes: Well, not – not offhand. You’d need to look at the – when this PEAK I cross-referred to got fixed, I suppose. But, well, if he said that, I suppose possibly – possibly, although without going into detail, I couldn’t say.

Ms Price: Could we have on screen, please, FUJ00173057. This is PEAK reference PC0225071. The summary is “Possibility of missing transactions on ARQ audit spreadsheets”, and the “Impact Statement”, dated 12 June 2018, is entered by you. It says this:

“There is a loophole in the code of QueryDLL.dll whereby if it is running during the evening service shutdown the resulting prosecution spreadsheets produced later may have missing transactions.

“There is a tiny possibility that errors in the QueryManager service may not be reported meaning that invalid prosecution spreadsheets may be produced.

“There is a possibility of errors being generated when audit queries are being run and the QueryManager service is shutdown and restarted. This wastes the time of the prosecution service and makes them rerun queries. This makes achieving SLAs more difficult.”

You appear from the log entries in this PEAK to have been involved in investigating and finding a fix to this problem. Looking on page 2, please, to an entry of 16 April 2013, there is a reference – going back up, please – there’s a reference to a meeting held the day before with Adam Spurgeon, Alan Holmes and Steve Goddard. Can you help with which teams these individuals were in?

Gerald Barnes: Alan Holmes was the Manager of the Audit Team at the time – not the Manager, was the Designer for the Audit Team at the time. Steve Goddard and Adam Spurgeon were Managers.

Ms Price: Scrolling – if you can’t assist any further –

Gerald Barnes: Well, I thought I’d answered the question.

Ms Price: I’m sorry, I thought you were continuing.

Gerald Barnes: No, that was it. Yes, there was – so Alan Holmes was the Designer, and the other two were Managers.

Ms Price: Scrolling –

Sir Wyn Williams: Before we get into this document, can I tell you, Ms Price, that I can’t go on beyond 4.30 today. So, since we’re getting reasonably close there, I think we’d better take stock about what’s happening.

Ms Price: Yes, sir. The witness is available to attend tomorrow morning to finish his evidence, should that be necessary. I was going to stop after this topic to see, sir, whether you wanted to sit a little later or to continue tomorrow.

Sir Wyn Williams: Well, I think we are going to have to continue tomorrow by the sound of it. So just choose a suitable moment between now and 4.30 to round off then, all right?

Ms Price: Yes, sir.

Scrolling down, please, to the last box on this page, the 16 April entry, you say that the affected platform was the audit server and the technical summary was:

“A loophole has been found in QueryDLL.dll whereby if it is running during the evening shutdown of the QueryManager service the prosecution spreadsheets produced later may have missing transactions.

“In addition the design ethos at the moment of QueryDLL is that on shutdown a failure state is indicated. This is to be changed to there being a rerun of the query after shutdown which would have prevented this problem in the first place, although there would still have been a problem if a genuine error rather than a shutdown had occurred prior to the faulty code which masked the earlier state.

“As well as that and as a precaution the error handling of QueryDLL.dll is going to be looked at and improved.”

The “Impact on User” is dealt with a little further down, going to the top of the next page:

“The prosecution spreadsheets will be more reliable after this fix …”

There is then an entry towards the bottom of this page on 24 May 2013 made by you. In this, you deal with completion of initial testing using a debug version and attach your test plan. You say:

“Unfortunately 7.22 has been superseded by an 8.01 release and so the fix will need merging … There has been a debate about where exactly this shall be released.

“Whilst investigating the original problem the following problems are fixed in QueryDLL.dll.

“The original major problem that transactions would go missing silently from spreadsheets if an evening QueryManager shutdown occurred at a particular point.”

You go on to explain the other aspects of that.

Over the page, we see the fourth entry down, dated 12 June 2013, is made by you:

“Andy Dunks has stated that he is prepared to only run audit queries in the day to prevent the possibility of audit transactions being missed from spreadsheets due to a bug in the code that handles the overnight shutdown of the QueryManager service.

“I am therefore proposing this PEAK for the 9.28 maintenance release.”

This is two months after the issue first arose. Was this the first point at which the process for running audit queries was modified to avoid the risk of spreadsheets being affected after the issue was raised in April 2013?

Gerald Barnes: I suppose so, yes, so that is right. Though Andy had checked all – so you could tell whether there had been an evening shutdown by looking in the QueryManager log, and Andy had checked them all, I believe, and so we had taken checks to make sure everything was okay, as I understand it. But that’s right, yes.

Ms Price: As far as you can see from this log and as far as you are aware, was Post Office told about this issue, either by this point, or before the fix in November 2014?

Gerald Barnes: I don’t know the answer to that, I’m afraid. I don’t know that.

Ms Price: The last entry on this page is dated 12 June 2013 and is made by you. Your “Technical Summary” is:

“A thorough review of the QueryManager service has been conducted. One major bug has been found which could result in prosecution spreadsheets having missing transactions if the QueryManager service is shutdown and restarted.

“In addition, many less serious issues have been found with the QueryManager service.

“There is a tiny possibility that if an error occurs it will not be reported.

“The evening shutdown can cause queries to fail that would otherwise have worked.

“These issues are all fixed.”

Is it right that your finding reported here, albeit you say these issues are now all fixed, was that an error could have occurred and not have been reported?

Gerald Barnes: That’s what it says, so it must be the case. Yes, that’s what it’s saying.

Ms Price: This is the same concern you had expressed in 2007, was it not, around error handling, that the code should be written in a way that prevents silent failures?

Gerald Barnes: That’s right, exactly. But I thought the query handler – as I say, I didn’t write it all myself. It was something that had been written by a team. I thought it was much better error handling than what I saw at EPOSS. Though, as with all things, you can always have little gaps, little mistakes, but I thought in general it was better. It had obviously been designed to trap errors from the word go, this service, and they missed little points but it’d basically been designed to trap errors from the word go.

Ms Price: Going over the page, please, about a third of the way down, you address the risks of not delivering the fix. Scrolling up, perhaps:

“RISKS (of releasing and not releasing proposed fix):

“If this fix is not delivered, there is the possibility that incorrect prosecution spreadsheets will be produced.

“If this fix is not delivered some prosecution spreadsheet production runs will fail if the evening shutdown occurs in the middle of them.”

Did you recognise, at the time, how significant a problem this might be?

Gerald Barnes: Yes, well, that’s right. Yes, definitely. That’s why we took a lot of – did a lot of checks. That’s right.

Ms Price: Did you recognise the risk that incorrect data might be presented in support of Post Office prosecutions?

Gerald Barnes: Absolutely. But, as I say, a lot of steps were taken to check this hadn’t happened.

Ms Price: In any of your conversations with Andy Dunks, do you recall him talking about the significance of the problem and the risk that incorrect data might be presented in support of Post Office prosecutions?

Gerald Barnes: Well, he just agreed that he was going to run the checks that were suggested to him, to – well, to make sure that his spreadsheets hadn’t been reduced in the evening shutdown. Because you can always tell, looking at the query handler log, whether this had happened or not.

Ms Price: Do you recall being told about any discussion of this issue with the Post Office?

Gerald Barnes: No, I never – that never – that never – no, I didn’t – conversations about the Post Office never really directly got to me, I don’t think.

Ms Price: This was an issue which had first come up in June 2013. Can you assist with why it took until November 2014, the date you give in your statement for the issue being fixed, for that to happen?

Gerald Barnes: Which statement? Which section of the statement?

Ms Price: Looking at your first statement –

Gerald Barnes: I think I do recall it.

Sir Wyn Williams: It’s paragraph 38(b) and it’s about six lines from the bottom.

Ms Price: Thank you, sir.

We can look –

Gerald Barnes: 2014 – oh, right.

Ms Price: We can look, if it helps, to the last entry in this PEAK, just going to the last page.

Gerald Barnes: December 20 – well, okay. It was finally closed in December 2014. Ah, but, ah, ah, but just a second, 19 November 2014, it’s got “[Software] Fix Available to Call Logger”.

Ms Price: If it assists, scrolling up a little, the entries immediately above.

Gerald Barnes: Yes, so it looks as if it’s been released – the fix was released in November 2014, it’s just that Jason has closed the PEAK in December 2014. So I think my statement is actually correct. Yes, subsequent – my statement says that “Subsequently deployed in or around November 2014”, which is what the statement by Lorraine Guiblin means, 19 November 2014.

Ms Price: So –

Gerald Barnes: “[Software] Fix Available to Call Logger.”

Ms Price: Looking back to that original date that the issue was raised – I misspoke earlier, it was 16 April 2013 – can you help with why it took until November 2014 for a fix?

Gerald Barnes: Oh, right, I’m not sure. I can’t remember now. I don’t know why it took so long. That seems quite a long time, certainly.

Sir Wyn Williams: Right, we’ll have to take that up further tomorrow, if necessary.

Ms Price: Sir, that was the last of my questions for today.

Sir Wyn Williams: I thought it would have been but I gave you the opportunity to have another go. All right.

Well, I’m very sorry, Mr Barnes, that you’ll have to return tomorrow but I’m grateful to you that you’ve made yourself available to come tomorrow. Forget about this case tonight, if you possibly can, don’t talk about your evidence and come ready for a much shorter session, I hope and suspect, tomorrow morning. Thank you.

The Witness: Thank you.

(4.30 pm)

(The hearing adjourned until 10.00 am the following day)