News

Esgyn in the news


Rohit Jain, CTO of Esgyn, Presenting at Data Day Seattle, July 23, 2016

July 21st, 2016

Attending Data Day Seattle? Esgyn will be there in full force to meet with you and discuss your Big Data challenges and how we can help. We are also excited that Rohit Jain, CTO of Esgyn Corporation, will be speaking at the conference. Send us an email at marketing@esgyn.com to schedule time with Rohit to discuss your specific situation. In Search of Database Nirvana – The Challenges of Delivering Hybrid Transaction/Analytical Processing, Rohit Jain, Co-founder & [...]


Learn About Apache Trafodion at HBaseCon 2016

May 17th, 2016

Attend Esgyn CTO Rohit Jain's HBaseCon 2016 session next Tuesday at 4:30pm to learn how Apache Trafodion has what it takes to run hybrid transaction/analytic processing (HTAP) workloads on Hadoop.


Discover EsgynDB at Strata

February 19th, 2016

See demos of EsgynDB powered by Apache Trafodion in booth 625 at Strata + Hadoop World, March 29-31 in San Jose, CA. Attend Esgyn CTO Rohit Jain's presentation In Search of Database Nirvana: The Challenges of Delivering HTAP.


EsgynDB 2.0 Release

December 8th, 2015

Esgyn has just released EsgynDB 2.0. This new version adds multi-data-center support (active/active), a key differentiator for bringing transactions on Hadoop into production environments. A complete list of the new features in this release is available here. Esgyn also issued a press release sharing the first customer success stories, featuring ADP, Webroot, and Kuwo. The press release is available here.


Apache Trafodion DBMS article on Bloor

October 16th, 2015

Bloor Research writes about Trafodion: This new database project, Trafodion (which is Welsh for "transactions"), offers SQL on Hadoop with full ACID properties, which are sometimes compromised in new and "affordable" databases. This promises, amongst other things, to let you run both operational transaction-processing workloads and "big data" analytics against the same Hadoop datastore environment.