{"id":93465,"date":"2023-06-20T21:31:36","date_gmt":"2023-06-20T21:31:36","guid":{"rendered":"https:\/\/www.techrepublic.com\/?p=4120536"},"modified":"2023-06-20T21:31:36","modified_gmt":"2023-06-20T21:31:36","slug":"hpe-discover-2023-greenlake-enters-the-ai-market-with-llm-cloud-service","status":"publish","type":"post","link":"https:\/\/cloudnewshub.com\/?p=93465","title":{"rendered":"HPE Discover 2023: GreenLake enters the AI market with LLM cloud service"},"content":{"rendered":"<div id>\n<p> The new cloud offering should be 100% carbon neutral and will run on the Cray supercomputer, HPE said. <\/p>\n<\/div>\n<div id>\n<figure id=\"attachment_3997512\" aria-describedby=\"caption-attachment-3997512\" class=\"wp-caption alignnone\"><img loading=\"lazy\" decoding=\"async\" class=\"size-full wp-image-3997512\" src=\"http:\/\/cloudnewshub.com\/wp-content\/uploads\/2023\/06\/hpe-discover-2023-greenlake-enters-the-ai-market-with-llm-cloud-service.jpg\" alt=\"Conceptual technology illustration of artificial intelligence and edge computing.\" width=\"794\" height=\"585\"><figcaption id=\"caption-attachment-3997512\" class=\"wp-caption-text\">Image: kras99\/Adobe Stock<\/figcaption><\/figure>\n<p>The new supercomputing cloud service GreenLake for Large Language Models will be available in late 2023 or early 2024 in the U.S., Hewlett Packard Enterprise announced at HPE Discover on Tuesday. GreenLake for LLMs will allow enterprises to train, tune and deploy large-scale <a href=\"https:\/\/www.techrepublic.com\/article\/chatgpt-vs-google-bard\/\">artificial intelligence <\/a>that is private to each individual business.<\/p>\n<p>GreenLake for LLMs will be available to European customers following the U.S. 
release, with an anticipated release window in early 2024.<\/p>\n<h2 id=\"startup\">HPE partners with AI software startup Aleph Alpha<\/h2>\n<p>\u201cAI is at an inflection point, and at HPE we are seeing demand from various customers beginning to leverage generative AI,\u201d said Justin Hotard, executive vice president and general manager for HPC &amp; AI Business Group and Hewlett Packard Labs, in a virtual presentation.<\/p>\n<p>GreenLake for LLMs runs on an AI-native architecture spanning hundreds or thousands of CPUs or GPUs, depending on the workload. This flexibility within one AI-native architecture offering makes it more efficient than general-purpose cloud options that run multiple workloads in parallel, HPE said. GreenLake for LLMs was created in partnership with Aleph Alpha, a German AI startup, which provided a pre-trained LLM called Luminous. The Luminous LLM can work in English, French, German, Italian and Spanish and can use text and images to make predictions.<\/p>\n<p>The collaboration went both ways, with Aleph Alpha using HPE infrastructure to train Luminous in the first place.<\/p>\n<p>\u201cBy using HPE\u2019s supercomputers and AI software, we efficiently and quickly trained Luminous,\u201d said Jonas Andrulis, founder and CEO of Aleph Alpha, in a press release. \u201cWe are proud to be a launch partner on HPE GreenLake for Large Language Models, and we look forward to expanding our collaboration with HPE to extend Luminous to the cloud and offer it as-a-service to our end customers to fuel new applications for business and research initiatives.\u201d<\/p>\n<p>The initial launch will include a set of open-source and proprietary models for retraining or fine-tuning. 
In the future, HPE expects to provide AI specialized for tasks related to climate modeling, healthcare, finance, manufacturing and transportation.<\/p>\n<p>For now, GreenLake for LLMs will be part of HPE\u2019s overall AI software stack (<b>Figure A<\/b>), which includes the Luminous model, machine learning development, data management and development programs, and the Cray programming environment.<\/p>\n<p><b>Figure A<\/b><\/p>\n<figure id=\"attachment_4120545\" aria-describedby=\"caption-attachment-4120545\" class=\"wp-caption alignnone\"><img loading=\"lazy\" decoding=\"async\" class=\"size-full wp-image-4120545\" src=\"http:\/\/cloudnewshub.com\/wp-content\/uploads\/2023\/06\/hpe-discover-2023-greenlake-enters-the-ai-market-with-llm-cloud-service-1.jpg\" alt=\"An illustration of HPE\u2019s AI software stack.\" width=\"1400\" height=\"702\"><figcaption id=\"caption-attachment-4120545\" class=\"wp-caption-text\">An illustration of HPE\u2019s AI software stack. Image: HPE<\/figcaption><\/figure>\n<h2 id=\"cray\">HPE\u2019s Cray XD supercomputers enable enterprise AI performance<\/h2>\n<p>GreenLake for LLMs runs on HPE\u2019s Cray XD supercomputers and NVIDIA H100 GPUs. 
The supercomputer and HPE Cray Programming Environment allow developers to do data analytics, natural language tasks and other work on high-powered computing and <a href=\"https:\/\/www.techrepublic.com\/article\/chatgpt-cheat-sheet\/\">AI applications<\/a> without having to run their own hardware, which can be costly and require expertise specific to supercomputing.<\/p>\n<p>Large-scale enterprise production <a href=\"https:\/\/www.techrepublic.com\/article\/bing-ai-chat-open\/\">for AI<\/a> requires massive performance resources, <a href=\"https:\/\/www.techrepublic.com\/resource-library\/whitepapers\/hiring-kit-artificial-intelligence-architect\/\">skilled people<\/a>, and security and trust, Hotard pointed out during the presentation.<\/p>\n<p><b>SEE: <\/b>NVIDIA offers AI tenancy on its <a href=\"https:\/\/www.techrepublic.com\/article\/nvidia-dgx-ai-supercomputer-computex-announcements\/\">DGX supercomputer<\/a>.<\/p>\n<h2 id=\"renewable\">Getting more power out of renewable energy<\/h2>\n<p>By using a colocation facility, HPE aims to power its supercomputing with 100% renewable energy. HPE is working with a computing center specialist, QScale, in North America on a design built specifically for this purpose.<\/p>\n<p>\u201cIn all of our cloud deployments, the objective is to provide a 100% carbon-neutral offering to our customers,\u201d said Hotard. \u201cOne of the benefits of liquid cooling is you can actually take the wastewater, the heated water, and reuse it. 
We have that in other supercomputer installations, and we\u2019re leveraging that expertise in this cloud deployment as well.\u201d<\/p>\n<h2 id=\"alternatives\">Alternatives to HPE GreenLake for LLMs<\/h2>\n<p>Other cloud-based services for running LLMs include NVIDIA\u2019s NeMo (which is currently in early access), <a href=\"https:\/\/www.techrepublic.com\/article\/amazon-bedrock-titan-cloud-artificial-intelligence\/\">Amazon Bedrock<\/a>, and Oracle Cloud Infrastructure.<\/p>\n<p>Hotard noted in the presentation that GreenLake for LLMs will be a complement to, not a replacement for, large cloud services like AWS and Google Cloud Platform.<\/p>\n<p>\u201cWe can and intend to integrate with the public cloud. We see this as a complementary offering; we don\u2019t see this as a competitor,\u201d he said.<\/p>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>The new cloud offering should be 100% carbon neutral and will run on the Cray supercomputer, HPE said. Image: kras99\/Adobe Stock The new supercomputing cloud service GreenLake for Large Language Models will be available in late 2023 or early 2024 in the U.S., Hewlett Packard Enterprise announced at HPE Discover on Tuesday. 
GreenLake for LLMs [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":93466,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[77,40,783],"tags":[],"class_list":["post-93465","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-artificial-intelligence","category-cloud","category-cloudsync"],"_links":{"self":[{"href":"https:\/\/cloudnewshub.com\/index.php?rest_route=\/wp\/v2\/posts\/93465","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/cloudnewshub.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/cloudnewshub.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/cloudnewshub.com\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/cloudnewshub.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=93465"}],"version-history":[{"count":0,"href":"https:\/\/cloudnewshub.com\/index.php?rest_route=\/wp\/v2\/posts\/93465\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/cloudnewshub.com\/index.php?rest_route=\/wp\/v2\/media\/93466"}],"wp:attachment":[{"href":"https:\/\/cloudnewshub.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=93465"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/cloudnewshub.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=93465"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/cloudnewshub.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=93465"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}