Measuring External Duration of Endpoints


We were recently load testing a new application with a client, and a recurring question came up: “How long was the transaction in OLS.Switch, and how long was it at the endpoint?”

It is an important question – one used to monitor application performance as well as to assist in troubleshooting – and one we can answer clearly: the transaction took a total of 5.6 seconds, and we waited up to our configured endpoint timeout of 5 seconds before we timed out the transaction. Or: the transaction took 156 ms, 26 ms of which were spent against a local response simulator.

In our application we use a profiler to trace the execution time of each of our Transaction Participants, which we see in our application logs as:

A normal transaction:

  open [0/0]
  parse-request [7/7]
  create-*******-tranlog [9/16]
  populate-********-tranlog [1/17]
  validate-********* [42/59]
  validate-********* [1/60]
  validate-******** [0/60]
  create-*********-request [24/84]
  query-****** [26/110]
  prepare-**********-response [40/150]
  close [6/156]
  send-response [0/156]
  end [157/157]

A timed-out transaction:

  open [2/2]
  parse-request [23/25]
  create-*******-tranlog [91/116]
  populate-*******-tranlog [1/117]
  validate-******* [67/184]
  validate-*******-card [31/215]
  validate-************** [1/216]
  create-********-request [32/248]
  query-******* [5000/5248]
  prepare-***********-response [67/5315]
  close [284/5599]
  send-response [0/5599]
  end [5600/5600]

(* note: these traces are from a test app running on my MacBook and are for illustrative purposes only *)

While we can answer the question by reviewing application logs, it is harder to perform analysis across a series of transactions, specifically for external duration. We can currently do so for total duration, which is valuable from the device perspective – how long a transaction took to process end to end.

By logging the external duration along with the total duration for switched-out transactions, we can now report both per transaction.
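As a minimal sketch of the idea (our illustration, not OLS.Switch code), the two durations can be pulled straight out of a profiler trace like the ones above, assuming each checkpoint line has the form `name [elapsed/cumulative]` and that the endpoint call is the checkpoint whose name starts with `query-`:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class ProfilerTraceDurations {
    // Matches lines like "  query-xxx [5000/5248]"
    private static final Pattern LINE =
        Pattern.compile("\\s*(\\S+)\\s+\\[(\\d+)/(\\d+)\\]");

    /** Returns {externalMs, totalMs} derived from a profiler trace. */
    public static long[] durations(String trace) {
        long external = 0, total = 0;
        for (String line : trace.split("\n")) {
            Matcher m = LINE.matcher(line);
            if (!m.matches()) continue;
            String name = m.group(1);
            long elapsed = Long.parseLong(m.group(2));
            long cumulative = Long.parseLong(m.group(3));
            if (name.startsWith("query-"))
                external += elapsed;   // time spent waiting on the endpoint
            if (name.equals("end"))
                total = cumulative;    // final cumulative time for the transaction
        }
        return new long[] { external, total };
    }
}
```

Run against the timed-out trace above, this yields an external duration of 5000 ms against a total of 5600 ms.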


OLS is PCI Compliant


Just a short note to share that OLS has received word from our QSA, via a “PCI Certificate of Validation” letter, that our newly launched hosted payment service offering, OLS.Host, has been validated.

Congrats to our Operations, Systems and Security gurus for all of their hard work on this!

CaseSwitch – Source Port Routing

We have implemented a new component in our Java- and jPOS-fueled payment switch, OLS.Switch, which we call the CaseSwitch. The vast majority of our switching algorithms are based either on the determination of CardType – which dictates which outbound endpoint we send a transaction to – or on card BIN ranges.

An example of a Bin Range:


If a card number’s BIN (or IIN) matches our BIN range configurations, we select the appropriate endpoint. In this example, if we have a Visa or MC card we switch it out to an FDR gateway. If we were connecting to a MasterCard MIP or a Visa VAP or DEX, we would instead have MC and Visa endpoints defined with our BankNet and VisaNet interfaces and switch the transactions to those endpoints.

An example of a Card Type:

We have certain transaction types where we know where they go because of their Card Type. Many of these are internal authorization hosts, such as implementations of Authorized Returns, MethCheck, Loyalty, and Couponing. Others are transactions where the transaction type also dictates the card type – such as those to GreenDot, InComm and other external hosts – so a BIN range lookup is unnecessary.

Source (Port) Based Routing

We recently had a requirement for source-based routing, where the source port dictates the outbound transaction path(s).

In our Server we accept the incoming transaction and then place a Context variable we call PORT that tells us which server port the transaction came in on. Once we have that additional data, we can perform a logic branch in our Transaction Manager that looks like this.

This allows us to define transaction paths based on the incoming port of the server. In this example:

<participant class="com.ols.switch.CaseSwitch" logger="Q2" realm="Switch">
  <property name="switch" value="PORT" />
  <property name="case 5001" value="LookUpResponse Log Close Send Debug" />
  <property name="case 5002" value="QueryRemoteHost_xxx Log Close Send Debug" />
  <property name="case 5005" value="QueryRemoteHost_yyy Log Close Send Debug" />
  <property name="default" value="Log Close Debug" />
</participant>

Port 5001 – we perform an authorization locally.

Port 5002 – we switch out the transaction and reformat it to endpoint xxx’s message format and interchange requirements.

Port 5005 – we switch out the transaction and reformat it to endpoint yyy’s message format and interchange requirements.
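The core of the CaseSwitch idea can be sketched in a few lines of plain Java (this is our illustration, not the actual OLS.Switch class – in jPOS the real participant would implement GroupSelector and read its cases from the Q2 configuration shown above):

```java
import java.util.HashMap;
import java.util.Map;

public class CaseSwitchSketch {
    private final Map<String, String> cases = new HashMap<>();
    private String defaultGroups = "";

    /** Register a "case NNNN" -> participant-group list mapping. */
    public CaseSwitchSketch addCase(String value, String groups) {
        cases.put(value, groups);
        return this;
    }

    public CaseSwitchSketch setDefault(String groups) {
        defaultGroups = groups;
        return this;
    }

    /** Mirrors a group selector: returns the groups to run next
        for the given value of the switched context variable (e.g. PORT). */
    public String select(String switchValue) {
        return cases.getOrDefault(switchValue, defaultGroups);
    }
}
```

With the configuration above, `select("5001")` would route to the local authorization path, while an unknown port falls through to the default group list.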

Signed Overpunch, Zoned Decimal – or, what are these weird characters in numeric fields???


We interface with many different systems, and sometimes we get to talk to IBM mainframes or message formats that use Signed Overpunch,

where we see numeric values like “100000{”, “100999I”, or “100495N”.

Signed Overpunch is used to save a byte: the last character indicates both the sign (+ / -) and a digit value.

These types are defined in a COBOL copybook with a signed display picture clause (for the values below, something like PIC S9(3)V9(4)):


which equate to:

100000{ = 100.0000

100999I = 100.9999

100495N = -100.4955

Here is a snippet of Java Code that we use to handle this:

    public static final char[] gt_0 = {
        '{', 'A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I'
    };
    public static final char[] lt_0 = {
        '}', 'J', 'K', 'L', 'M', 'N', 'O', 'P', 'Q', 'R'
    };

    protected static String convertToCobolSignedString (String aString) {
        int aInt = Integer.parseInt(aString);
        char[] conv = (aInt >= 0) ? gt_0 : lt_0;
        // abs() on both: Java's % yields a negative remainder for negative
        // values, and the sign is carried by the overpunch character itself
        int lastChar = Math.abs(aInt % 10);
        StringBuffer sb = new StringBuffer (Integer.toString(Math.abs(aInt)));
        sb.setCharAt (sb.length()-1, conv[lastChar]);
        return sb.toString();
    }
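Going the other way is just as simple. Here is a sketch of the reverse operation (our own illustration for this post, not OLS code): turn an overpunched string such as “100495N” back into a signed integer.

```java
public class Overpunch {
    // Overpunch characters for +0..+9 and -0..-9 respectively
    private static final String POS = "{ABCDEFGHI";
    private static final String NEG = "}JKLMNOPQR";

    /** Decodes e.g. "100495N" to -1004955 (scaling is up to the caller). */
    public static int parse(String s) {
        char last = s.charAt(s.length() - 1);
        String digits = s.substring(0, s.length() - 1);
        int d = POS.indexOf(last);
        boolean negative = false;
        if (d < 0) {
            d = NEG.indexOf(last);
            negative = true;
        }
        if (d < 0)
            throw new IllegalArgumentException("not an overpunch character: " + last);
        int value = Integer.parseInt(digits) * 10 + d;
        return negative ? -value : value;
    }
}
```

Note the decoded value is the raw integer; applying the copybook’s implied decimal point (four decimal places in the examples above) is a separate step.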

Velocity Manager and Velocity Profiles

I recently put together a document that describes our Issuer implementation of Transaction Velocity Checks during the authorization process. We use a facility called the Velocity Manager to implement authorization rules that are based on frequency of transactions over a given time period. Velocity profiles can be used to implement extensible velocity-based logic.

Here is the data-structure that defines our Velocity Limits:

[image: Velocity Profile.png]

Here is a snapshot of a configured Velocity Limit based on Accumulated Transaction Amounts:

[image: 500 limit.png]

Here is a snapshot of a configured Velocity Limit based on Transaction Counts over a given period:

[image: 10 txns.png]
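To make the idea concrete, here is an illustrative sketch only (the field names and API are our assumptions, not the actual Velocity Manager schema) of a limit that combines both configured checks above: a cap on transaction count and on accumulated amount over a sliding time window.

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class VelocityLimitSketch {
    private final int maxCount;         // e.g. 10 transactions per period
    private final long maxAmountCents;  // e.g. 50000 == $500.00 accumulated
    private final long windowMillis;    // length of the sliding window
    // history entries are {timestampMillis, amountCents}
    private final Deque<long[]> history = new ArrayDeque<>();

    public VelocityLimitSketch(int maxCount, long maxAmountCents, long windowMillis) {
        this.maxCount = maxCount;
        this.maxAmountCents = maxAmountCents;
        this.windowMillis = windowMillis;
    }

    /** Returns true (and records the txn) if it is within both limits. */
    public boolean authorize(long now, long amountCents) {
        // drop entries that have aged out of the window
        while (!history.isEmpty() && now - history.peekFirst()[0] > windowMillis)
            history.removeFirst();
        long total = amountCents;
        for (long[] e : history) total += e[1];
        if (history.size() + 1 > maxCount || total > maxAmountCents)
            return false;   // would breach count or amount limit: decline
        history.addLast(new long[] { now, amountCents });
        return true;
    }
}
```

In the real system the accumulators live with the card record and the limits come from the configured velocity profile; the sliding-window bookkeeping is the essential part.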

The Freeze

The “Freeze” is upon us in the payments space. Any and all system changes should have been made, and the time spent making sure your payment systems and infrastructure can handle the Black Thursdays, Fridays, and holiday season should be complete; now we are in monitoring mode. Rather than expand more here – Andy Orrock wrote an excellent piece on this last year: Everybody Freeze!

In the Payment Systems World, we’ve now entered “The Freeze.” This is the industry-wide term associated with the period running from approximately the Thursday before Thanksgiving (the fourth Thursday of November in the US) through to about the second full week in January. During that period, we don’t do any production releases, unless it’s a fix of a critical nature. We advocate the same practice for our clients. We also recommend that they not undertake material changes to hardware configurations, databases, scripts, or any other piece of supporting or underlying technology.

Traditionally, it’s been a good time for us to focus on big projects that have a Q1 delivery date. We can stay under the covers and make some serious progress on those bigger initiatives.

We also send out a customer letter re-emphasizing how to get a hold of us. The letter stresses the importance of vigilance and watchfulness over key production systems. It reminds our clients that we’re Always Available. Our firm was founded on the back of 24x7x365 support of mission-critical production systems. We get paid to make sure our clients can – to the fullest extent possible – enjoy the holidays with their family knowing that we’ve got their back on support.

Why all the heightened concern? It’s the nature of payment systems: there’s tremendous upsurge in volume in the freeze period. If you’ve got a latent bottleneck laying dormant and ready to strike, the unfortunate reality is that it’s going to nail you right between the eyes on a killer day like Black Friday, Cyber Monday or Christmas Eve. We service some Stored Value authorization endpoints that get massive 20x surges in volumes on December 24th. So, you’ve got to be ready.

We work the other ten-and-a-half months of the year to make this month-and-a-half as uneventful as possible.

Now you’re the Switch — Successful Implementation Strategies

My colleague Andy Orrock wrote a blog post titled “Now, you’re the switch” where he summarizes the challenges he has witnessed with poor implementations of some interfaces that we connect to:

But here’s the thing: once we send the transaction to you, now you’re the switch.  What I mean by that is:  your application is now beholden to the same throughput, speed, efficiency, extensibility and 24x7x365 availability concerns that define our lives.  And while there have been many that have been up to the task, there have been countless other instances where that’s not been the case.

There are a few basic models that we have seen work well:

  1. OLS-implemented business logic based on customer-developed prototypes.
  2. OLS-implemented business logic, with customer-provided database or flat-file update feeds that drive authorization decisions.
  3. OLS Transaction Participants that call customer-provided software – e.g. a CustomerSecretSauceManager.process() method – within our processing framework.
  4. Under our guidance, tech-savvy customers interface remotely to a local endpoint via TCP/IP sockets or web services, handling the concerns that we address here.

Pre-Authorization data for completions and reversals and removal of Track II Data

Late last week I received an email detailing a few message-format specification changes for a processing gateway interface that OLS.Switch connects to. It discussed changes to the data elements required for “match-up” – the data from the original transaction that one must return in a reversal or completion message.

We leverage Host Reversals (note: these are not the same as refunds) when we don’t receive a response from an authorizer and need to remove the authorization, most typically in time-out scenarios. We don’t know whether the processor accepted the transaction and we just didn’t receive the response, or whether the processor never received the original transaction at all. In cases like these we are obligated to send reversal messages to reverse the transaction. In the credit world, where there are large open-to-buys, credit limits and expiring authorizations, this is less of an issue; in the debit world, mistakes here cause pain for cardholders, and duplicate charges occur. We SAF (Store and Forward) our reversals and retry for a set period of time, with delay intervals, until we receive an acknowledgment that our reversal transaction was received.

Completions can be forced sales or post-authorization transactions or, in certain industries, completions with updated amounts that differ from the authorization (think restaurant tip and gas pump transactions here).

The changes are listed below:

Track 2 data is no longer required for completion, reversal or void transaction types. With the elimination of the Track 2 Data (Field ID 35) these transaction types will now require the Account Number (Field ID 02) and Expiration Date (Field ID 14) to be provided. All clients should make these modifications prior to your next PCI assessment.

This is great news: this is one of the last processing gateways we interface with that required ISO 8583 Data Element 35 (Track 2 Data) for reversals and completions. If you ever noted the specific wording in the PCI DSS about “subsequent to the authorization,” this is part of why I believe that wording is there. The issue is that while SAF queues are normally located in memory, there are times when they can be configured to persist to disk (and if they are not, think of the cardholder impact of duplicate charges). This change means less data – specifically, pre-authorization data – must be stored prior to the authorization. We have leveraged “encrypted spaces” to help protect all types of SAF queues, and we are happy Track 2 Data is no longer required for match-ups. Ideally, we hope processors will take this a step further, remove the requirement to send the PAN, and leverage other unique transaction identifiers or composite identifiers. Take a look at one of our “FindOriginal” Transaction Participants for a MasterCard interface:

CriteriaImpl(TranLog:this[][date>=Thu Sep 17 09:11:41 CDT 2009, irc=1816, stan=000000087625, originalItc=100.00, acquirer=987654999, mid=123456789012345, tid=12234501, banknetReference=MQWWRJ4QW ])

No Card Number there!
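To illustrate the composite-identifier approach (our illustration, not the actual FindOriginal code), a match-up key can be assembled entirely from the transaction identifiers visible in the criteria above – STAN, acquirer, MID, TID and the network reference number – with no PAN anywhere:

```java
public class MatchKey {
    /** Builds a reversal/completion match-up key from non-PAN identifiers. */
    public static String of(String stan, String acquirer, String mid,
                            String tid, String networkReference) {
        return String.join("|", stan, acquirer, mid, tid, networkReference);
    }
}
```

A key like this is enough to locate the original transaction in the tranlog, so nothing that falls under PCI DSS storage requirements needs to sit in a SAF queue.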

PCI Council : Wireless Security Guides for Payment Cards

There are a few news articles today that reference this article, which talks about the PCI Council publishing a Wireless Security Guide for Payment Cards.

Update, July 17, 2009 – The guide is now listed on the PCI Co website, and the direct link is here.

From the article, these appear to be the relevant points from the guidelines:

  • The guidelines require a firewall that demarcates the edge of the organization’s CDE (cardholder data environment).
  • To combat the problem of rogue access points, businesses will need to regularly use a wireless analyzer or preventative measures such as a wireless intrusion detection/prevention system (IDS/IPS).
  • The council is advising large organizations to set up automated scanning using a centrally managed wireless IDS/IPS system.
  • The guidelines suggest quarterly scans to detect rogue wireless devices that could be connected to the CDE at any location, and an incident-response plan to deal with them.
  • To isolate wireless networks that don’t transmit, store or process cardholder data, a firewall must be used, and it has to perform the functions of filtering packets based on the 802.11 protocol; performing stateful inspection of connections; and monitoring and logging traffic allowed and denied by the firewall according to PCI DSS rule 10. The firewall logs would have to be monitored daily and the firewall rules verified once every six months.
  • The wireless guideline also says “relying on a virtual LAN (VLAN) based on segmentation is not sufficient.”
  • For “in-scope wireless networks,” physical security should apply, with options that include mounting wireless access points high up on a ceiling and disabling the console interface and factory reset options by using a tamper-proof chassis.
  • Change the default settings of the access points in terms of default administrative passwords, encryption settings, reset function. Disable SNMP access to remote access points if possible. Do not advertise organization names in the SSID broadcast.
  • Use of AES encryption is recommended for WLAN networks. Specifically, information flowing through certain network segments, including secure wireless devices that connect to the private WLAN through the access points, must be encrypted.
  • Wireless usage policies should be established for “explicit management approval to use wireless networks in the CDE.” Usage policies require labeling of wireless devices with owner, contact information and purpose.