System Memory Power & Thermal Management in Platforms Built on Intel® Centrino® Duo Mobile Technology
By: Intel Corporation
Download this next:
Optimizing mobile games using Arm Mobile Studio
By:
Type: Video
Optimizing games for mobile devices can be challenging: developers need to balance CPU and GPU performance against the limited memory bandwidth and thermal envelope of the mobile platform. This webinar provides an overview of the features and capabilities of the Arm Streamline profiler and the Arm Graphics Analyzer from Arm Mobile Studio, and walks step by step through how they can be used together to profile application workloads.
• Gain deep insight into Unity applications running on Arm Mali-based devices
• Learn how to trace the impact of your code on GPU and CPU performance
• Understand how to optimize Unreal Engine content for more devices
These are also closely related to: "System Memory Power & Thermal Management in Platforms Built on Intel® Centrino® Duo Mobile Technology"
-
K8s Limits and Requests: Monitoring and Troubleshooting by example
By:
Type: Replay
Are your Kubernetes applications not performing well enough? Is your infrastructure oversized? Kubernetes limits and requests dictate the resources available to your applications, so when they aren't set correctly your cluster suffers from CPU throttling and out-of-memory (OOM) kills. Oversizing your infrastructure is an easy but expensive solution; there must be a better way. Prometheus metrics give you insight into your Kubernetes limits and requests, helping you detect and troubleshoot common issues. Learn how to maximize the availability and performance of your Kubernetes infrastructure with proven examples.
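The CPU-throttling symptom described above shows up directly in cAdvisor-style container counters as exposed to Prometheus. The sketch below is a hypothetical illustration, assuming counter deltas for `container_cpu_cfs_throttled_periods_total` and `container_cpu_cfs_periods_total` over some window; the sample numbers are invented.

```python
# Sketch: estimating how often a container is CPU-throttled, from
# CFS-period counters of the kind cAdvisor exports to Prometheus.
# The sample values below are hypothetical.

def throttle_ratio(throttled_periods: int, total_periods: int) -> float:
    """Fraction of CFS scheduling periods in which the container was throttled."""
    if total_periods == 0:
        return 0.0
    return throttled_periods / total_periods

# Hypothetical deltas over a 5-minute window for one container:
#   container_cpu_cfs_throttled_periods_total -> 1200
#   container_cpu_cfs_periods_total           -> 3000
ratio = throttle_ratio(throttled_periods=1200, total_periods=3000)
print(f"throttled in {ratio:.0%} of periods")  # prints "throttled in 40% of periods"
```

A ratio this high would usually suggest the container's CPU limit is set too low for its workload.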
-
Royal Holloway: Rowhammer – From DRAM faults to escalating privileges
By: TechTarget ComputerWeekly.com
Type: Research Content
Increased density in dynamic random-access memory (DRAM) makes it harder to prevent the charge in one cell from interacting with adjacent cells; this side effect has evolved into the vulnerability known as Rowhammer. Discover how it is used to exploit memory management techniques in different environments, inject errors into cryptographic protocols and mount privilege escalation attacks, and learn the countermeasures that can help protect your organisation.
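As a toy illustration of the row adjacency at the heart of Rowhammer, the sketch below computes the addresses of the two "aggressor" rows surrounding a victim row. The 8 KiB row size and the linear address-to-row mapping are simplifying assumptions; a real attack additionally requires uncached, high-rate accesses (e.g. via `clflush` on x86) and knowledge of the actual DRAM address mapping.

```python
# Illustrative sketch only: given a victim address, find the start addresses
# of the DRAM rows directly above and below it ("double-sided" hammering).
# Assumes a linear mapping and an 8 KiB row, which real DIMMs need not use.

ROW_SIZE = 8 * 1024  # assumed bytes per DRAM row

def aggressor_rows(victim_addr: int, row_size: int = ROW_SIZE) -> tuple[int, int]:
    """Return start addresses of the rows adjacent to the victim's row."""
    row = victim_addr // row_size
    return ((row - 1) * row_size, (row + 1) * row_size)

above, below = aggressor_rows(0x4000)
# Hammering both neighbours of the row containing 0x4000 means repeatedly
# accessing addresses in the rows starting at 0x2000 and 0x6000.
```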
Find more content like what you just read:
-
Inline Memory Encryption Enabling Data in Use Protection for Confidential Computing
By:
Type: Video
Join us as Ajay Kapoor discusses how Inline Memory Encryption can protect data moving between processors and attached memory. In this presentation, he'll discuss:
- Introduction to Inline Memory Encryption and its benefits
- Techniques for integrating memory encryption into hardware designs
- Examples demonstrating effective use of inline memory encryption
-
VMware Platform Services deploying Tanzu with Intel Hardware
By:
Type: Replay
VMware and Intel collaboration demonstrates how tiered memory with Intel® Optane™ persistent memory (PMem) enables up to a 66% reduction in the number of servers while increasing memory per server tenfold.

VMware Tanzu is a popular container platform, and VMware itself runs its own containerized applications on Tanzu. Like its customers that use Tanzu, VMware Platform Services (VPS) faces data center challenges: tight IT budgets, memory-hungry modern workloads, and outdated hardware. VMware recently collaborated with Intel to determine the viability of upgrading hardware to consolidate servers and use tiered memory to provide Tanzu's containerized workloads with more memory than the existing legacy hardware could support.

Following best practices developed by Intel for right-sizing tiered memory systems, VMware data center architects monitored real-world, production, containerized workloads running on Tanzu to understand average memory and CPU utilization. The memory metrics gathered for the legacy server environment indicated that VMware's Tanzu deployment was a good fit for a tiered memory system with Intel Optane PMem. In a tiered memory configuration, Intel Optane PMem serves as main capacity memory and a small amount of 3,200 MT/s DRAM serves as a cache.

Tiered memory enabled VMware to replace 27 legacy blade servers with nine newer 1U servers equipped with 3rd Gen Intel® Xeon® Scalable processors. As a result, per-server memory capacity increased from 384 GB to 4 TB, while memory costs fell by up to 33%.

In summary, VMware's deployment of tiered memory for their production Tanzu environment proves that Intel Optane PMem enables massive server consolidation (reducing the number of servers by as much as 66%) and provides vast amounts of memory for VMware Tanzu containers at an affordable $/GB.
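The headline numbers in the case study above are easy to sanity-check with a few lines of arithmetic (the server and capacity figures come from the text; the rounding is mine):

```python
# Sanity-checking the consolidation figures quoted in the case study.
legacy_servers, new_servers = 27, 9
legacy_mem_gb, new_mem_gb = 384, 4 * 1024  # 384 GB vs 4 TB per server

server_reduction = 1 - new_servers / legacy_servers
memory_growth = new_mem_gb / legacy_mem_gb

print(f"server reduction: {server_reduction:.0%}")  # 2/3; the text's "as much as 66%" rounds down
print(f"per-server memory: {memory_growth:.1f}x")   # roughly the "tenfold" increase quoted
```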
-
CXL 1.1 vs. CXL 2.0 – What’s the difference?
By:
Type: Talk
Compute Express Link™ (CXL™) is a high-speed interconnect offering coherency and memory semantics using high-bandwidth, low-latency connectivity between the host processor and devices such as accelerators, memory buffers, and smart I/O devices. The CXL 1.1 specification introduced and defined the CXL I/O protocol, memory protocol, and coherency interface. The CXL 2.0 specification adds support for switching, for fan-out to connect to more devices; memory pooling, for increased memory utilization efficiency and memory capacity on demand; and persistent memory. This webinar will share a high-level overview of CXL 1.1 and the enhancements made in CXL 2.0, focusing on switching, memory pooling, Single Logical Devices (SLD) vs. Multiple Logical Devices (MLD), and fabric management. The presentation will also explore managed hot-plug, memory QoS telemetry, speculative reads, and security enhancements.
-
[Ep.1] The Game Changer: An Introduction to Tiered Memory
By:
Type: Talk
With over 90 percent of enterprises navigating a pandemic-accelerated Digital Transformation journey, the need to become more efficient, innovative and data-centric has never been more apparent. And with a glut of advice and strategies on how best to do this, it's not always easy to cut through the noise and see which route may be the most effective for your business and workloads.

One path to reaching this state of efficiency and innovation is through memory. Now a vital tool in minimizing costs and supercharging Digital Transformation initiatives, efficient memory capacity is an essential part of any enterprise's modernization strategy. For decades, traditional DRAM was the primary memory solution in the market; despite its incredible performance, DRAM has inherent capacity limitations and can be extremely expensive as you scale capacity. The market is ready for a new way to approach memory challenges and better manage its data, and the answer to these common challenges is a concept called Tiered Memory. But what exactly is Tiered Memory, and what advantages can it offer?

In episode 1 of Intel's 'Tiered Memory is Here' series, John Burke, CTO of Nemertes, and Drew Peterson, Global Memory & Storage Sales Manager at Intel, come together to answer just this, and to shed light on how Tiered Memory can offer an alternative to DRAM, deliver a unique combination of affordable large capacity, and improve memory-bound workloads. Join John and Drew in this 45-minute webinar to hear:
- What Tiered Memory is - and what it isn't
- How Tiered Memory can drive innovation and uncover new operational efficiencies
- Common challenges that utilizing a Tiered Memory system can present, and how to overcome them
- How Tiered Memory can lead to optimized business insights and analytics
- And more
-
Making Memories at HyperScale with CXL®
By:
Type: Talk
CXL® (Compute Express Link®) enables the addition of a new tier of memory to the memory hierarchy using type-3 devices. There are many first-generation CXL memory expansion devices in the market that will allow this capability. However, due to additional controller and board costs and associated power consumption, much of the value proposition of memory tiering is diluted. This webinar will outline a framework to enable CXL devices to be deployed at scale and allow the expansion of platform memory capacity incrementally and cost-effectively. The webinar will also explore how this framework will be a fundamental enabler for more ambitious CXL-enabled memory architectures in the future.
-
What’s in a Name? Memory Semantics and Data Movement with CXL™ and SDXI
By:
Type: Talk
Using software to perform memory copies has been the gold standard for applications performing memory-to-memory data movement or system memory operations. With new accelerators and memory types enriching the system architecture, accelerator-assisted memory data movement and transformation need standardization. SNIA's Smart Data Accelerator Interface (SDXI) Technical Work Group (TWG) is at the forefront of standardizing this. The SDXI TWG is designing an industry-open standard for a memory-to-memory data movement and acceleration interface that is extensible, forward-compatible, and independent of I/O interconnect technology. A candidate for the v1.0 SNIA SDXI standard is now in review.

Adjacently, Compute Express Link™ (CXL™) is an industry-supported cache-coherent interconnect for processors, memory expansion, and accelerators. CXL is designed to be an industry-open standard interface for high-speed communications, as accelerators are increasingly used to complement CPUs in support of emerging applications such as Artificial Intelligence and Machine Learning.

In this webcast, we will:
• Introduce SDXI and CXL
• Discuss data movement needs in a CXL ecosystem
• Cover SDXI advantages in a CXL interconnect
-
Using Quantum Memories in Entanglement-based Networks
By:
Type: Talk
Quantum memories are an integral part of quantum networks, enabling the secure, efficient, and long-distance transmission of quantum information in a variety of applications. This webinar provides an exploration of quantum memories, their applications, and the practical aspects of integrating them into entanglement-based network architectures. Join Michael Wood, Aliro Quantum CMO, for this presentation examining rapidly developing quantum memory technologies. Business leaders and technologists will gain a foundational understanding of the roles quantum memories play in entanglement-based network applications.

In this webinar you'll learn:
- What a quantum memory is and what it does
- The different types of quantum memories, and which are best suited for use in entanglement-based networks
- Applications enabled by quantum memories, and their role in an entanglement-based network
- The future of quantum memory development and its use in entanglement-based networks

Organizations that are preparing their businesses, security posture, and connectivity for the quantum age are investigating how to plan, design, and implement quantum-secure cybersecurity solutions today. This webinar will help prepare you to navigate the quantum future.
-
Introducing High-Performance Data Center Solutions with AMD EPYC
By: Supermicro
Type: White Paper
As AI transforms enterprise computing, organizations are modernizing their data centers to keep up. This white paper breaks down how to level up your data center and improve performance with help from Supermicro H14 servers. Read on to access 13 pages of insights.
-
Ask the Experts: DDR5 Memory Interface Chips
By:
Type: Video
In this episode of Ask the Experts, we sat down with Rambus memory expert John Eble to learn what's new with DDR5 server memory modules (RDIMMs). Major topics discussed include:
- The chips in DDR5 memory's new RDIMM architecture
- The importance of memory module PMICs
- New Rambus DDR5 server PMIC products
- Timing and enablement of DDR5 7200 server platforms
-
Persistent Memory Trends - A Panel Discussion
By:
Type: Talk
Panelists include Gary Kotzur, CTO, Marvell; Andy Mills, Sr. Director of Advanced Product Development, SMART Modular; Pekon Gupta, Business Strategy Specialist, Intel; and David McIntyre, Director of Product Planning and Business Enablement, Samsung Corporation.

Where do companies see the industry going with regard to persistent memory? With the improvement of SSD and DRAM I/O over CXL, the overlap of CXL and NVMe, high-density persistent memory, and memory-semantic SSDs, there is a lot to talk about! Our panel of experts will widen the lens on persistent memory, take a system-level approach, and see how the persistent memory landscape is being redefined.
-
Compute Express Link™ (CXL™): Supporting Persistent Memory
By:
Type: Talk
Compute Express Link™ (CXL™) is an open industry-standard interconnect offering coherency and memory semantics using high-bandwidth, low-latency connectivity between the host processor and devices such as accelerators, memory buffers, and smart I/O devices. The CXL 2.0 specification introduces support for switching, memory pooling, and persistent memory – all while preserving industry investments by supporting full backward compatibility. An increasing number of applications — ranging from databases to AI workloads — are being enhanced to take advantage of persistent memory. This webinar will explore how the CXL specification has evolved to support persistent memory devices in a way that preserves the established software model. This webinar will cover enhancements to the CXL protocol, error handling, and standardized configuration interface, enabling innovative designs that are based on a variety of non-volatile media and form factors.
-
Join the Memory Tiering Revolution with VMware and Intel
By:
Type: Replay
Rapidly changing world dynamics and exponential data growth have increased the demand for IT services. During a period of unprecedented economic uncertainty, IT organizations are deploying more infrastructure but within limited budgets. Intel and VMware are addressing this IT paradox. Learn how architectural approaches like memory tiering can increase memory capacity and help reduce TCO. Hear how VMware and Intel have partnered to make infrastructure ownership and management more efficient with memory tiering and monitoring.
- Storage tiering has been in place for years; now do it with memory
- Increase memory capacity while reducing cost, without sacrificing performance
- Maintain visibility using the VMware Memory Monitoring and Remediation tool (vMMR)
Speakers:
- Simon Todd, Technical Solution Specialist, Intel
- Arvind Jagannath, Cloud Platform Product Management, VMware
- Sudhir Balasubramanian, Senior Staff Solution Architect & Global Oracle Lead, VMware
-
Accelerating Generative AI – Options for Conquering the Dataflow Bottlenecks
By:
Type: Talk
Workloads using generative artificial intelligence trained on large language models are frequently throttled by insufficient resources (e.g., memory, storage, compute, or network dataflow bottlenecks). If not identified and addressed, these dataflow bottlenecks can constrain Gen AI application performance well below optimal levels. Given the compelling uses across natural language processing (NLP), video analytics, document resource development, image processing, image generation, and text generation, being able to run these workloads efficiently has become critical to many IT and industry segments. The resources that contribute to generative AI performance and efficiency include CPUs, DPUs, GPUs, FPGAs, plus memory and storage controllers.

This webinar, with a broad cross-section of industry veterans, provides insight into the following:
• Defining the Gen AI dataflow bottlenecks
• Tools and methods for identifying acceleration options
• Matchmaking the right xPU solution to the target Gen AI workload(s)
• Optimizing the network to support acceleration options
• Moving data closer to processing, or processing closer to data
• The role of the software stack in determining Gen AI performance
-
Persistent Memory, CXL, and Memory Tiering - Past, Present & Future
By:
Type: Talk
Join Jim Handy of Objective Analysis as he moderates a panel featuring Andy Rudoff and Bhushan Chitlur from Intel, David McIntyre from Samsung, and Sudhir Balasubramanian and Arvind Jagannath from VMware as they discuss persistent memory, Compute Express Link™ (CXL™), and memory tiering, and how the ecosystem is working together to provide memory tiering solutions using CXL for customer use cases.
-
Storage performance: From fundamentals to bleeding edge
By: TechTarget ComputerWeekly.com
Type: eGuide
In this guide we look at LUN provision and management, RAID and flash storage, plus tuning for analytics workloads and key emerging technologies that can give your organisation a competitive advantage in mission-critical operations, such as storage-class memory, NVMe flash and persistent memory.
-
Ask the Experts: HBM3E Memory Interface IP
By:
Type: Video
In this episode of Ask the Experts, we discuss HBM3E memory with Nidish Kamath, Director of Product Management for Memory Interface IP at Rambus, and Frank Ferro, Group Director of Memory and Storage IP at Cadence. Topics discussed include:
• The role HBM plays in today's computing landscape
• Reasons why we've seen such a rapid evolution of the HBM specification in recent years
• Characteristics of HBM, and HBM3E specifically, that make it particularly suitable for AI training
• Challenges on the physical layer and memory controller sides to meet the higher performance targets of HBM3E
• How Cadence and Rambus work together to instantiate a complete HBM3E memory subsystem for the customer
• Where people can learn more about the Cadence and Rambus IP solutions for HBM3E
-
Increasing Memory Utilization and Reducing Total Memory Cost Using CXL
By:
Type: Talk
CXL’s advanced memory expansion and fabric management capabilities can be used to increase system scalability and flexibility across multiple compute domains, enabling resource sharing for higher performance, reduced software stack complexity, and lower overall datacenter memory cost. The fabric enhancements and memory expansion features included in CXL 3.0 deliver new levels of composability required by the large models used in HPC and AI in the modern datacenter. In this webinar, expert representatives from CXL Consortium member companies who are implementing the specification will explore the CXL 3.0 features, new use case enablement, and ROI examples when implementing CXL attached memory.
-
Using VR to Craft Immersive Memories (VR for Good)
By:
Type: Talk
What you will learn: How it is possible to use VR to create impactful memories and bring memories to life. With their years of experience and creative expertise, Limbert Fabian and Jen Cadic will present two of their outstanding productions, St. Jude Hall of Heroes and War Remains. Limbert and Jen were key to the development of those projects and have great insight into how those projects became leading examples of using VR to craft immersive memories. As part of the discussion, you will have the opportunity to ask questions and view video footage of the projects.
-
An Overview of the Compute Express Link™ (CXL™) 2.0 ECN
By:
Type: Talk
In November 2020, the CXL Consortium released the CXL 2.0 specification which introduces support for switching, memory pooling, and support for persistent memory – all while preserving industry investments by supporting full backward compatibility. Based on member feedback, CXL 2.0 ECNs made significant improvements to the specifications in the areas of device management, RAS, Security, memory interleaving and others. This webinar will review the key CXL 2.0 specification ECNs and the new usages they enable.
-
Systems Management Demo Series – OpenManage Enterprise Power Manager
By:
Type: Video
Lori Matthews, Product Manager of OpenManage Enterprise Power Manager, shows customers how this solution: • Automates power capping with policies • Alerts on power and thermal events • Prevents outages caused by power and thermal issues • Reduces power consumption • Provides comprehensive reporting to: identify where a server can be moved into production faster and more safely, provide chargeback data, and map the power and thermal health of the servers
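The policy-driven power capping described above can be illustrated with a small sketch. The proportional-allocation rule and all numbers below are illustrative assumptions for explanation only, not OpenManage Enterprise Power Manager behavior:

```python
# Hypothetical sketch: allocating a rack-level power budget across servers.
# When total demand exceeds the budget, each server's cap is scaled down
# proportionally to its measured demand (an assumed policy, for illustration).
def allocate_caps(rack_budget_w, demands_w):
    """Return per-server power caps in watts for the given demands."""
    total = sum(demands_w)
    if total <= rack_budget_w:
        return list(demands_w)  # under budget: no capping needed
    scale = rack_budget_w / total
    return [round(d * scale, 1) for d in demands_w]

# Three servers drawing 1200 W, 900 W, and 1500 W against a 3000 W budget:
print(allocate_caps(3000, [1200, 900, 1500]))  # each cap scaled by 3000/3600
```

A real implementation would read demand telemetry and apply caps through the server management interface; the sketch only shows the budgeting arithmetic a capping policy rests on.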
-
Introducing the CXL 3.1 Specification
By:
Type: Talk
The CXL 3.1 Specification introduces enhancements to fabric capability and manager API definition for PBR switch, inter-host communication using Global Integrated Memory (GIM), Trusted-Execution-Environment Security Protocol (TSP), and memory expander improvements. These enhancements will enable composable and disaggregated systems to keep up with the demand for high-performance computational workloads. This webinar will introduce the CXL 3.1 specification and explore the new features including: • CXL Fabric improvements and extensions • Trusted-Execution-Environment Security Protocol (TSP) • Memory expander improvements
-
Smart Cooling for PowerEdge Servers: Quick Guide
By: Dell Technologies and Intel
Type: Product Overview
PowerEdge servers are designed with Smart Cooling which uses state-of-the-art thermal technologies with intelligent control systems to ensure optimal cooling and sustained system performance. Check out this overview for a quick guide to Smart Cooling.
-
CXL is coming: Are you ready?
By:
Type: Talk
Compute Express Link (CXL) technology is rushing onto the scene with a goal of changing the very nature of computer architecture. Not only does this interface support memory disaggregation, persistent memory, and nonuniform memory architectures (NUMA), but it is also being used to standardize the interface between chiplets, which should improve processor cost/performance while creating more diversity in processor types. Members of the storage and computing communities must understand CXL if they want to keep up with tomorrow’s most productive computing configurations. This presentation will begin with a brief CXL tutorial, to explain what it is and why it is needed, and then move to use cases and the various configurations that the new protocol supports. Attendees will learn: - Why CXL came into being, and what memory models it supports. - The problem of “stranded” memory, and how CXL addresses it. - How CXL benefits the adoption of storage class memory. - CXL’s use in the UCIe chiplet interface. - What storage admins must prepare for as CXL rolls out. About the speaker: Jim Handy of Objective Analysis has over 35 years in the electronics industry including 20 years as a leading semiconductor and SSD industry analyst. Early in his career he held marketing and design positions at leading semiconductor suppliers including Intel, National Semiconductor, and Infineon. A frequent presenter at trade shows, Mr. Handy is highly respected for his technical depth, accurate forecasts, widespread industry presence and volume of publication. He has written hundreds of market reports, articles for trade journals, and white papers, and is frequently interviewed and quoted in the electronics trade press and other media. He posts blogs at www.TheMemoryGuy.com, and www.TheSSDguy.com.
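The "stranded" memory problem mentioned above can be made concrete with a toy calculation. The fleet numbers below are assumptions chosen for illustration; the point is only that DRAM on a host whose cores are fully allocated cannot serve new local workloads, whereas CXL memory pooling could lend that surplus to other hosts:

```python
# Illustrative sketch (assumed numbers): estimate "stranded" memory, i.e.
# DRAM left unusable on hosts that have no free CPU cores to pair it with.
def stranded_gib(hosts):
    """hosts: list of (total_mem_gib, used_mem_gib, cores_free) tuples."""
    # Memory on a host with zero free cores can't back new local workloads,
    # so its unused portion is stranded without a pooling fabric like CXL.
    return sum(total - used for total, used, cores_free in hosts
               if cores_free == 0)

fleet = [
    (512, 300, 0),  # cores exhausted: 212 GiB stranded
    (512, 480, 8),  # cores still free: memory not stranded
    (512, 200, 0),  # cores exhausted: 312 GiB stranded
]
print(stranded_gib(fleet))  # 524 GiB that pooling could make usable
```
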
-
Optimize Performance, Reduce Latency, Increase Memory Capacity: Dell EMC VxRail
By:
Type: Video
In today’s rapidly changing economic landscape, innovative companies need a modern, data-centric infrastructure that can deliver actionable insights in real-time. Transforming infrastructures to meet today’s expanding business needs requires IT organizations to be agile, scale quickly and address business-critical workloads, all while ensuring a simplified and seamless deployment. Dell EMC VxRail now features the full range of Intel® Optane™ technologies for the data center -- 2nd Gen Intel® Xeon® Scalable processors, Intel® Optane™ persistent memory and Intel® Optane™ SSDs. Now Hyperconverged Infrastructure, with higher performance and lower latency, can increase memory capacity at a lower cost than traditional memory, optimize flexibility and scalability, and lower Total Cost of Ownership (TCO). In this webinar you will hear from experts from both VxRail and Intel on how you can work faster, broaden use cases and leverage your full data potential with Dell EMC VxRail and Intel Optane Persistent Memory Technologies. Presented by: Flavio Fomin, Director of SW Engineering | Dell Technologies Nelson Fonseca, Sr. Principal Engineer | Dell Technologies Drew Peterson, Persistent Memory Specialist | Intel Data Platforms Group Chris Murphy, Global Account Manager, Intel Corporation
-
Modern Uses of Thermal Imaging and Computer Vision Technologies
By:
Type: Talk
Welcome back to the Dell Technologies Innovation Webinar Series. In this conversation we will focus on the modern uses of thermal imaging and computer vision technologies. The next generation of safety and security solutions such as thermal imaging and computer vision are being used in novel ways in today’s world, and we believe this is only the beginning.
-
The Neuroscience of Memorable Content
By:
Type: Talk
Research shows that people forget 90% of your content after 48 hours. So how can they act on your message if they only remember a tenth of it? How do you even know which tenth they’ll remember? Drawing on neuroscience insights, Dr. Carmen Simon shares practical techniques to help you create memorable and actionable messages. Join this session to learn the latest neuroscience research, based on both cognition and emotion, to understand why memory is mandatory in business and how to create memorable and actionable content. After all, what is the use of memory if people don't act on it? Join this webinar to learn how to: *Create consistent mental models to impact precision memory *Appeal to the brain’s two pathways to attention and memorability *Offer content that people like and want (liking and wanting are two different networks in the brain)
-
Ask the Experts: DDR5 Client Chipset
By:
Type: Video
In this episode of Ask the Experts, we discuss DDR5 for client systems with John Eble, VP of Product Marketing, Memory Interface Chips at Rambus. Topics discussed include: • The need for advanced chipsets for DDR5 client DIMMs • The role of the DDR5 Client Clock Driver (CKD) and its use cases • The AI applications driving the need for greater memory performance in client systems
-
Critical apps on older vSphere? Now What?
By:
Type: Talk
My company is running business-critical apps on an older vSphere version. Now what? The expert panel, with representatives from VMware and Intel, moderated by Deb Howard, discusses the implications of the lifecycle for vSphere v6 and v7. What are the options for business-critical applications on v6 or v7? We will compare the features and benefits of v6, v7, and v8 from a memory management and TCO perspective. We will dive deeper into the performance and cost benefits of running v6, v7, or v8 with deployments of Intel Xeon processors, Intel Optane persistent memory (PMem), and Intel Optane SSDs. What are the implications of waiting to upgrade? Is waiting for CXL-based memory solutions a viable path forward?
-
Best Practices on Implementing Thermal Vision Solutions
By:
Type: Talk
Welcome back to the Dell Technologies Innovation Webinar Series. In this conversation, Wayne Arvidson interviews Ken Mills, CEO of Intellisite, as he shares best practices for implementing thermal vision solutions. The next generation of safety and security solutions such as thermal imaging and computer vision are being used in different ways in today’s world, and we believe this is only the beginning.
-
Balance the Power with Intel Optane Persistent Memory
By:
Type: Video
Organizations rely on their most critical business workloads to differentiate their business and drive desired outcomes. As such, we understand there are many paths toward modernizing the data estate, and data management is not a one-size-fits-all strategy. Listen in as experts discuss the latest features of Dell Technologies PowerEdge Server capabilities to bring faster insights, improve TCO, operate more nimbly, and improve system performance. Learn how to take advantage of Intel Optane Persistent Memory to: • Save more – more cost-effective than traditional memory • Do more – support more VMs, VDI sessions or databases per server • Go faster – eliminate memory and storage performance bottlenecks. In addition, hear a high-level overview of why a storage platform makes the most sense for Optane-accelerated workloads: SAP HANA on PowerMax, Splunk on PowerScale and SQL Server on PowerStore.