

Where Did My Love For Data Start?


 

Bynet Team Photo (left to right, top to bottom) Top Row: John Wright, Jim (Hjerpe) Kaskade, Sumit Sharma, Dennis Russell, Bob McMillen; Bottom Row: Paul Micheletti, Lawrence Ladao, Serdar Yilmaz, Bob Moussavi, Doug Hundley.

Jack Shemer & David Hartke – True Legacies

Whenever Jack visited me, he used to leave sticky notes on my desk with nuggets of wisdom. For example, “Keep people you trust close to you.”…or, “Key values for Teradata were: Pride, Enthusiasm, Importance of the Individual, Teamwork and Open Communications, Ethics, Dedication, Quality, Support, Success, and Entrepreneurship.”

In July 1999, Jack Shemer and David Hartke both decided to come out of retirement to help me and my team start a new company, INCEP (along with a few other veterans of the industry including Art Collmeyer, Bob Adams, and Phil Paul). Little did I know, Jack would not only “give me my wings as a CEO”, but also start a process that ended up transforming me, creating the value system I use today.

“Initial Partner Presentation 1980, prior to Beta test in Dec, 1983” was the note he wrote on his initial Teradata business plan, which I still have today. Inside was a copy of a less formal “Preliminary Business Plan” dated April, 1980. Jack (CEO) and Phil Neches (CTO) were both on the “payroll” then (with only $175K of seed capital, later to be augmented with institutional money from Brentwood Associates, run by Tim Pennington and Kip Hagopian). Co-founders David Hartke (Engineering) and Jerry Modes (Finance) planned on leaving their current day jobs within the next month (after their first true round of financing). With funding they could bring the entire founding team together, plus a few project leaders.

Their first milestone was the “demonstration of a complete, working hardware prototype” within 18 months (December 1981). Jack was asking for only $2.5M of initial venture funding to carry the team through milestone 1, and another $3M to get to the “first system ready for shipment to a customer” by December 1982 (month 30). They eventually closed a Series A of $2.6M on July 23, 1980, and subsequently raised $12M in December 1981, $12M in January 1983, and another $40M over three additional rounds in 1984, ’85, and ’86. Teradata IPO’d in August 1987, raising $37.5M of public capital.

YNET

Few people know that the backbone of the Teradata Database Computer (DBC) was originally referred to as the HINET (“High Speed Network”), later renamed to the YNET, and then redesigned as the BYNET.

The DBC1012 was designed to attach to existing mainframe and mini-computers to provide a substantial increase in system throughput, response time, ease of use, and reliability… using a relational model DBMS. The target was to increase throughput by as much as 4 times that of IBM’s IMS, and to support two orders of magnitude more in database size and processing power. [Note: I'll compare this approach and the current approach of Hadoop from providers like Hortonworks and Cloudera in a future post.]

The YNET was engineered to interconnect up to 1024 processor modules (Interface Processors or IFPs, and Access Module Processors or AMPs) in a distributed, shared-nothing, Massively Parallel Processing (MPP) configuration. The YNET was originally envisioned to support broadcast and sorting, allowing for linear scalability (i.e., per-module performance does not degrade as processor modules are added). Believe it or not, the system was engineered to scale from 1.5 to 512 MIPS (yes, back in 1980, only 512 MIPS).

Fast forward to 1990, when a team was formed in a joint development between NCR and Teradata. The project code name was “P90”, and it consisted of an elite team of 100 engineers from Teradata and 100 engineers from NCR, who were placed in an abandoned building in Torrey Pines, San Diego. Our charter was to “kill IBM” by producing the most powerful next-generation database system in the world.

At the time, the YNET still provided for communication among all processors (AMPs, IFPs, and COPs – the COP had the same functions as the IFP, but was used to communicate with network attached DOS-PC/UNIX hosts). The YNET always operated in a broadcast (one-to-all communication) mode and the two YNETs (primary and backup) had a total system bandwidth of 12 MBPS at the time.

It was well understood that Jack Shemer and David Hartke’s invention, the YNET, would easily support 200-300 processors using 80386 Intel CPUs (rated at 4 MIPS each). However, scaling above 512 next-generation processor modules (rated at 100 MIPS each) would result in the YNET becoming a bottleneck (the network would become the limiting factor of scale).

BYNET

So we embarked on a journey to develop the next-generation YNET that could scale to 4096 high-performance nodes, where we could easily support 10 MBPS PER PROCESSOR MODULE, linearly scalable up to 4096 processors (vs. 12 MBPS in total on the network). Thus, a 512-node system would support bandwidth up to 10.2 GBPS.
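The arithmetic behind that figure is easy to check. A minimal sketch, assuming (my assumption, following the primary/backup pairing described for the YNET) that the ~10.2 GBPS figure counts both of the dual redundant fabrics:

```python
# Back-of-the-envelope bandwidth comparison (numbers from the post):
# the YNET was a shared medium with a fixed total bandwidth, while the
# BYNET added bandwidth with every processor module.

YNET_TOTAL_MBPS = 12        # total across both YNETs, shared by all nodes
BYNET_PER_NODE_MBPS = 10    # per processor module, per fabric

def bynet_aggregate_gbps(nodes: int, fabrics: int = 2) -> float:
    """Aggregate BYNET bandwidth in GBPS, assuming dual redundant fabrics."""
    return nodes * BYNET_PER_NODE_MBPS * fabrics / 1000

print(bynet_aggregate_gbps(512))   # -> 10.24, i.e. the ~10.2 GBPS cited above
print(bynet_aggregate_gbps(4096))  # -> 81.92 at full scale
```

The key design point is that bandwidth grows with node count rather than being divided among nodes.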

The other breakthrough was creating a network that allowed processors to communicate point-to-point, multicast, or broadcast. This design leveraged concepts from the Banyan crossbar switch, where the network is constructed from a modular switch node building block. In the case of the BYNET, we created a switch node that is an 8×8 crossbar, able to connect any of its eight input ports to any of its eight output ports simultaneously, arbitrating when conflicts arise. It operates very similarly to a telephone network.

A sender (one of the many Teradata processors) “dials” a receiver (another processor) by sending a connection request to the network. The request contains an address or “phone number” which is interpreted by the switch nodes. Once the connection goes through, a circuit is established and held for the duration of the “call”. To support up to 4096 nodes, we modified a folded, indirect n-cube topology. No network topology quite like ours was known at the time, but it generally fell within the Banyan class of topologies.
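The “phone number” interpretation can be illustrated with the classic self-routing property of Banyan-class networks: with 8×8 switch nodes, 4096 nodes is exactly 8^4, so four stages of switches suffice, and each stage can peel off one octal digit of the destination address to select an output port. This is a sketch of that general idea, not the actual BYNET routing logic:

```python
# Illustrative self-routing through a multistage Banyan-style network
# built from 8x8 crossbar switch nodes. Four stages of radix-8 switches
# address 8**4 = 4096 nodes; each stage uses one octal digit of the
# destination "phone number" to choose one of its eight output ports.

RADIX = 8     # 8x8 crossbar switch node
STAGES = 4    # 8**4 = 4096 addressable nodes

def route(dest: int) -> list[int]:
    """Return the output port chosen at each stage for a destination node."""
    assert 0 <= dest < RADIX ** STAGES
    ports = []
    for stage in reversed(range(STAGES)):          # most significant digit first
        ports.append((dest // RADIX ** stage) % RADIX)
    return ports

# Example: destination 0o7213 (octal) exits ports 7, 2, 1, 3 stage by stage.
print(route(0o7213))  # -> [7, 2, 1, 3]
```

Because each switch node can decide locally from the address digit, no central routing table is needed; conflicts at a port are what the crossbar's arbitration resolves.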

A folded network was chosen to support packaging large networks. Because this was a database machine with large amounts of data being routed between nodes, a circuit switched network (vs. packet switched) was implemented. The BYNET has no single point of failure with redundant paths between every input and output. The BYNET guarantees delivery of every message and ensures that broadcasts get to every target node. So the database isn’t plagued by communication errors or network failures and does not have to pay the price of acknowledgements or other error-detection protocols. This part of the Teradata system was truly disruptive.

Behind me and the team in the above picture is a 256 node Teradata 3700 database system, circa 1992.

This is where my love for data started.

[Note: This BYNET team was responsible for the new Teradata 3700 network architecture, BYNET protocol, BYNET switch node, BYNET I/O processor, BYNET Interface Controller Board, BYNET Type-A, B, and C Boards, BYNET CMA/A and CMA/B Backplanes.]
