OFC 2018 – 400G Inside the Datacenter

When the conversation at OFC turned to short-reach client optics, faster speeds were the central focus. Several companies announced products and demonstrated 400G-capable technology. The 100G speeds and production ramp that were such a hot topic at OFC 2017 saw little follow-on discussion in 2018.

OFC 2017 – Inside the Datacenter

100G and the Road to 400G

The transition to 100G network speeds inside the data center is underway at every major hyperscale operator simultaneously, creating industry-wide bottlenecks. Even as QSFP28 remains supply constrained, component and equipment suppliers are trying to align on the next-generation format for 400G operation. Cignal AI's key takeaways from OFC 2017 on data center optics include:

  • Updates on QSFP28 supply and demand
  • Alternative facts on both sides of the Octal SFP (OSFP) vs. QSFP-DD debate
  • Impact of QSFP28 on CFP/CFP2 client demand
  • 400G CFP8 and 200G QSFP observations and outlook


Investor Call – OFC 2017 Takeaways


Andrew Schmitt provided key OFC 2017 takeaways during an investor call hosted by Troy Jensen of Piper Jaffray. Sixty-seven investors participated. Topics of interest included:

  • Impact of Ciena's WaveLogic Ai DSP Licensing
  • Outlook for Coherent Port Shipments in China During 2017
  • Market Outlook for CFP2-ACO/DCO and QSFP28 Markets
  • Potential Vertical Integration by Intel or Inphi
  • Timing of Metro Coherent and ROADM deployment in China
  • Chinese SARFT Capex Guidance
  • Roadmap for 400G and 600G Coherent
  • Observations on Infinera, Ciena, Acacia, Oclaro, and NeoPhotonics

Following is a summary of the key takeaways, as well as a full transcript of the discussion. Cignal AI clients can also listen to an audio replay.


LinkedIn Designs Own 100G DC Switch

LinkedIn joined the cool kids and built its own 100G switch based on the Broadcom Tomahawk ASIC.

To be clear, LinkedIn didn't do the hardware design; it bought a white box and wrote its own software. The only technical reason given for doing this was "We need better buffer visibility." This was debated on Twitter, with some arguing that if you need to do this, you are doing it wrong.
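For readers unfamiliar with the term, "buffer visibility" means being able to see how full the switch ASIC's shared packet buffer gets per port and queue. The following is a minimal sketch of what such telemetry could look like; the JSON counter format, the 16MB buffer size, and the threshold are all assumptions for illustration, not LinkedIn's actual tooling:

    # Minimal sketch of "buffer visibility" telemetry: poll per-queue peak
    # occupancy watermarks and flag queues approaching buffer exhaustion.
    # The JSON feed below is a stand-in for whatever vendor SDK or gNMI
    # path a real switch NOS would expose.
    import json

    BUFFER_LIMIT_BYTES = 16 * 1024 * 1024  # assumed Tomahawk-class shared buffer
    ALERT_THRESHOLD = 0.80                 # alert at 80% of the buffer pool

    def check_buffers(watermarks_json):
        """Return alerts for queues whose peak occupancy crossed the threshold."""
        alerts = []
        # Expected shape: [{"port": ..., "queue": ..., "peak_bytes": ...}, ...]
        for entry in json.loads(watermarks_json):
            if entry["peak_bytes"] / BUFFER_LIMIT_BYTES > ALERT_THRESHOLD:
                alerts.append(f"{entry['port']}/q{entry['queue']}: "
                              f"{entry['peak_bytes']} bytes peak")
        return alerts

    # Example: one queue peaking at ~90% of the shared buffer triggers an alert.
    sample = '[{"port": "Ethernet12", "queue": 3, "peak_bytes": 15099494}]'
    print(check_buffers(sample))

Owning the software stack means instrumentation like this can be added wherever the operator wants it, rather than waiting on a vendor feature request.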

Aside from greater development control, the real reason LinkedIn went with a white box was to save money. The article cites per-switch licensing costs; LinkedIn did the math and determined that an ongoing fixed R&D expenditure and vertical integration were cheaper. LinkedIn is a company with $750M in annual R&D and can justify building its own tools.
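As a rough sketch of that math, assuming purely hypothetical figures (the article does not disclose LinkedIn's actual license fees or team costs), the break-even fleet size works out like this:

    # Hypothetical break-even: per-switch software licensing vs. a fixed
    # in-house NOS team. Both figures below are illustrative assumptions.
    per_switch_license = 2_000      # assumed annual license fee per switch ($)
    annual_inhouse_rd = 3_000_000   # assumed yearly cost of an in-house team ($)

    # Above this fleet size, fixed in-house R&D beats per-switch licensing.
    break_even = annual_inhouse_rd / per_switch_license
    print(f"In-house software is cheaper beyond {break_even:,.0f} switches")

At hyperscale fleet sizes, the fixed cost amortizes quickly, which is the crux of the argument.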