Computer processors – Hardware Specs
http://hardware-specs.net/

TSMC’s quarterly profit rose to 6 billion in the fourth quarter of 2021
https://hardware-specs.net/tsmcs-quarterly-profit-rose-to-6-billion-in-the-fourth-quarter-of-2021/
Fri, 14 Jan 2022

Taiwan Semiconductor Manufacturing Co., the world’s largest contract manufacturer of computer chips, reported that its quarterly profit rose 16.4% from a year earlier to US$6 billion (via PA).

TSMC quarterly earnings in 2021

In the last quarter (three months) of 2021, TSMC’s revenue increased by 21.2% to $15.8 billion, alongside the $6 billion in quarterly profit. Headquartered in Hsinchu, Taiwan, TSMC manufactures processors for a number of companies, including Apple. Recent company announcements include:

  • Plans to invest US$100 billion over the next three years in manufacturing and research and development
  • Plans to build its first chip factory in Japan. TSMC and Sony Corp. said they would jointly invest $7 billion in the facility
  • Plans for a second US production site located in Arizona

TSMC has a semiconductor wafer fab in Camas, Washington, and design centers in San Jose, California, and Austin, Texas. The company will spend $12 billion on its factory in Arizona with the goal of producing 20,000 silicon wafers per month at the 5-nanometer scale. This level of chip is the smallest, fastest and most power efficient manufactured today.

The company expects its capital expenditure to reach US$44 billion in 2022, up from the US$30 billion spent in 2021. TSMC aims to fend off competition from Samsung, another chipmaker, as well as Intel, which announced [PDF] its own chip foundry business in 2021. Intel Foundry Services would be a stand-alone, fully vertical company.

TSMC has built a manufacturing plant in southern Taiwan for 3-nanometer chips, as well as a new plant for the production of 5-nanometer chips.

A key feature of Apple’s M1 chips could be coming to PCs
https://hardware-specs.net/a-key-feature-of-apples-m1-chips-could-be-coming-to-pcs/
Thu, 13 Jan 2022

Researchers have created a new version of memory, dubbed “UltraRAM,” that can store files and data without power and still has the speed of RAM.

A research team from Lancaster University has developed a new memory system, called ‘UltraRAM,’ which combines the data storage qualities of solid-state storage with the ultra-fast capabilities of modern RAM. This is a new take on the functionality demonstrated by Apple with its M1 line of processors, which use a unified memory system that can share RAM between different processing units to create significant performance gains. With newer Apple Macs, the CPU and GPU can share system memory based on the needs of the programs running on the computer at the time. With this new technology, there is potential for pluggable RAM to perform the functions of an SSD faster than current storage solutions on the market.


How RAM and SSDs work is key to understanding why UltraRAM’s unified memory matters. Random access memory, often abbreviated as RAM, is a type of volatile storage for computers and mobile devices. As computation proceeds, RAM can quickly find and store key information, and just as quickly discard it to make room for more important data. On the other hand, solid-state storage – commonly found in computer SSDs – is non-volatile and stores data for long periods of time, albeit at slower speeds than RAM. While RAM can achieve incredible speeds limited only by thermal output, SSDs trade that potential speed for long-term storage.



Research published by Lancaster University shows a new type of RAM that can store files even without power, while maintaining the breakthrough speeds of modern RAM. The technique uses different semiconductors – the kind otherwise found in LEDs and lasers – to retain information for long periods of time without access to electricity. This is precisely why UltraRAM is being put forward as the next unified memory solution. If UltraRAM can store long-term data at the same speeds as current-generation RAM, which is getting small enough to enter the gaming phone market, it could replace the need for SSDs in the future.


Differences between UltraRAM and Apple Unified Memory



Apple’s M1 line of chips contains a different type of unified memory than the UltraRAM concept proposed by Lancaster University. In the M1 processor, the RAM modules are integrated into the system on chip (SoC). The SoC contains the CPU, GPU, and neural engine on a single chip, allowing the computer to allocate memory between any of these three processors as needed. UltraRAM technology takes the idea of unified memory and positions it as an SSD replacement. Instead of sharing memory between processing units, UltraRAM can use memory as long-term storage at fast speeds.

UltraRAM, although still far from reaching the mainstream market, could lead to real improvements in everyday computer use. Even though the memory can only store a limited amount of data due to its high speed, it can store crucial data that is frequently accessed. A potential use case would be to store the operating system on UltraRAM. Loading a computer’s operating system on something as fast as modern RAM could reduce boot times in a way that even the most casual computer user would notice.



Source: Lancaster University


AWS Announces General Availability of Amazon EC2 Hpc6a Instances
https://hardware-specs.net/aws-announces-general-availability-of-amazon-ec2-hpc6a-instances/
Tue, 11 Jan 2022

SEATTLE – (BUSINESS WIRE) – Today, Amazon Web Services, Inc. (AWS), an Amazon.com, Inc. company (NASDAQ: AMZN), announced general availability of Amazon Elastic Compute Cloud (Amazon EC2) Hpc6a instances, a new instance type that is specifically designed for tightly coupled High Performance Computing (HPC) workloads. Hpc6a instances, powered by 3rd Generation AMD EPYC processors, extend AWS’s portfolio of HPC compute options and offer up to 65% better price performance compared to similar compute-optimized Amazon EC2 instances that customers use for HPC workloads today. Hpc6a instances make it even more cost effective for customers to scale HPC clusters on AWS to run their most compute-intensive workloads such as genomics, computational fluid dynamics, weather forecasting, molecular dynamics, computational chemistry, financial risk modeling, computer-aided engineering, and seismic imaging. Hpc6a instances are available on demand through a low-cost, pay-as-you-go usage model with no upfront commitment. To get started with Hpc6a instances, visit aws.amazon.com/ec2/instance-types/hpc6.

Organizations in many industries rely on HPC to solve their most complex academic, scientific, and business problems. However, using HPC efficiently is expensive, because processing large amounts of data requires an abundance of computing power, fast memory and storage, and low-latency networking within HPC clusters. Some organizations build on-premises infrastructure to run HPC workloads, but this involves expensive upfront investments, long procurement cycles, ongoing management overhead to monitor hardware and keep software up to date, and limited flexibility when the infrastructure inevitably becomes obsolete and needs to be upgraded. Customers in many industries run their HPC workloads in the cloud to take advantage of the superior security, scalability, and elasticity it offers. Engineers, researchers, and scientists rely on AWS to run their largest and most complex HPC workloads and choose Amazon EC2 instances with enhanced networking (e.g. C5n, R5n, M5n, and C6gn) to scale tightly coupled HPC workloads that require high levels of inter-instance communication across thousands of interrelated tasks. While the performance of these instances is sufficient for most HPC use cases, as workloads evolve to solve increasingly difficult problems, customers are looking to maximize price performance when running HPC workloads across tens of thousands of servers on AWS.

The new Hpc6a instances are purpose-built to provide the best value for running large-scale HPC workloads in the cloud. Hpc6a instances offer up to 65% better price performance for HPC workloads that perform complex calculations across a range of cluster sizes, up to tens of thousands of cores. Hpc6a instances are enabled with Elastic Fabric Adapter (EFA), a network interface for Amazon EC2 instances, by default. With EFA networking, customers benefit from low latency, low jitter, and up to 100 Gbps of EFA network bandwidth to increase operational efficiency and achieve faster results from workloads that rely on inter-instance communication. Hpc6a instances are powered by 3rd Generation AMD EPYC processors that operate at frequencies up to 3.6 GHz and provide 384 GB of memory. By using Hpc6a instances, customers can more cost-effectively solve their most important and difficult academic, scientific, and business problems with HPC, and enjoy the benefits of AWS at superior price performance.
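
The announcement points to the AWS console and documentation for getting started; purely as an illustration, here is a minimal boto3 sketch of launching a single EFA-enabled Hpc6a instance in US East (Ohio). The AMI, subnet, security group, key pair, and placement group values are placeholders, and the `hpc6a.48xlarge` size is an assumption not stated in the text above.

```python
# Minimal sketch: launching an EFA-enabled Hpc6a instance with boto3.
# The AMI ID, subnet, security group, key pair, and placement group are
# placeholders; "hpc6a.48xlarge" is assumed as the available size.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-2")  # US East (Ohio)

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",          # placeholder HPC-ready AMI
    InstanceType="hpc6a.48xlarge",            # assumed Hpc6a size
    MinCount=1,
    MaxCount=1,
    KeyName="my-key-pair",                    # placeholder key pair
    Placement={"GroupName": "my-cluster-pg"}, # cluster placement group (placeholder)
    NetworkInterfaces=[{
        "DeviceIndex": 0,
        "InterfaceType": "efa",               # request an Elastic Fabric Adapter
        "SubnetId": "subnet-0123456789abcdef0",
        "Groups": ["sg-0123456789abcdef0"],
    }],
)
print(response["Instances"][0]["InstanceId"])
```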

“By constantly innovating and creating new Amazon EC2 instances specifically designed for virtually any type of workload, AWS customers have realized tremendous price performance for some of today’s most critical applications. While high performance computing has helped solve some of the most difficult problems in science, engineering, and business, running HPC workloads efficiently can be cost-prohibitive for many organizations,” said David Brown, vice president of Amazon EC2 at AWS. “Designed for HPC workloads, Hpc6a instances now help customers achieve up to 65% better price performance for their HPC clusters at virtually any scale, so they can focus on solving the problems that matter most to them without the cost barriers that exist today.”

“We are excited to continue our momentum with AWS and provide their customers with this powerful new instance for high performance computing workloads,” said Dan McNamara, senior vice president and general manager, Server Business at AMD. “AMD EPYC processors help customers of all sizes solve some of their most important and complex problems. From small universities to large research centers and enterprises, Hpc6a instances powered by 3rd Gen AMD EPYC processors are opening the world of powerful HPC performance with cloud scalability to more customers globally.”

Customers can use Hpc6a instances with AWS ParallelCluster (an open-source cluster management tool) to provision Hpc6a instances alongside other instance types, giving customers the flexibility to run workloads optimized for different instances within the same HPC cluster. Hpc6a instances benefit from the AWS Nitro System, a set of building blocks that offload many traditional virtualization functions to dedicated hardware and software to deliver high performance, high availability, and increased security while reducing virtualization overhead. Hpc6a instances are available for purchase as On-Demand or Reserved Instances, or with Savings Plans. Hpc6a instances are available in the US East (Ohio) and AWS GovCloud (US-West) Regions, and will soon be available in other AWS Regions.

Maxar partners with innovative companies and more than 50 governments to monitor global change, deliver broadband communications, and advance space operations with space infrastructure and Earth intelligence capabilities. “Amazon EC2 Hpc6a instances are another exciting announcement from AWS that allows Maxar to continue to meet and exceed our customers’ requirements for large compute workflows, whether to accelerate research and operational workloads in numerical weather prediction or to create the world’s best, most up-to-date, and most accurate digital twin models with our Maxar Precision3D suite of products,” said Dan Nord, SVP and Product Manager at Maxar Technologies. “Hpc6a’s AMD EPYC processors combined with EFA networking capability give us a 60% performance improvement over alternatives, while also being more cost effective. This allows Maxar to strategically choose from the suite of AWS HPC cluster configurations that we have developed to best meet the needs of our customers while maximizing flexibility and resiliency.”

DTN’s global network of weather stations delivers hyper-local, accurate, real-time weather information to give organizations actionable insights. “Our collaboration with AWS allows us to better serve our customers with high-resolution weather forecasting systems that power analytics engines,” said Lars Ewe, CTO at DTN. “We are very happy with the price performance of Hpc6a instances, and we expect them to be our go-to Amazon EC2 instance choice for HPC workloads in the future.”

TotalCAE has over 20 years of experience with HPC for computer-aided engineering (CAE). TotalCAE helps eliminate IT headaches by professionally managing the HPC engineering environment and customers’ engineering applications so that they can focus on engineering, not IT. “The TotalCAE platform enables CAE departments to easily adopt the agility and flexibility of AWS with just a few clicks for hundreds of engineering applications such as Ansys Fluent, Siemens Simcenter STAR-CCM+, and Dassault Systèmes Abaqus,” said Rod Mach, President of TotalCAE. “As an AWS HPC Competency Partner, we help customers run their CAE workloads in the cloud. With Hpc6a instances, we’ve seen up to a 30% performance increase for computational fluid dynamics workloads at a lower cost, enabling TotalCAE to deliver best-in-class performance and scalability to its customers in the cloud.”

About Amazon Web Services

For more than 15 years, Amazon Web Services has been the world’s most comprehensive and broadly adopted cloud offering. AWS has continuously expanded its services to support virtually any cloud workload, and it now has more than 200 fully featured services for compute, storage, databases, networking, analytics, machine learning and artificial intelligence (AI), Internet of Things (IoT), mobile, security, hybrid, virtual and augmented reality (VR and AR), media, and application development, deployment, and management, delivered from 84 Availability Zones (AZs) in 26 geographic regions, with announced plans for 24 more Availability Zones and eight more AWS Regions in Australia, Canada, India, Israel, New Zealand, Spain, Switzerland, and the United Arab Emirates. Millions of customers, including the fastest-growing startups, largest enterprises, and leading government agencies, trust AWS to power their infrastructure, become more agile, and lower their costs. To learn more about AWS, visit aws.amazon.com.

About Amazon

Amazon is guided by four principles: customer obsession rather than competitor focus, passion for invention, commitment to operational excellence, and long-term thinking. Amazon strives to be the world’s most customer-centric company, the world’s best employer, and the world’s safest place to work. Customer reviews, 1-Click shopping, personalized recommendations, Prime, Fulfillment by Amazon, AWS, Kindle Direct Publishing, Kindle, Career Choice, Fire tablets, Fire TV, Amazon Echo, Alexa, Just Walk Out technology, Amazon Studios, and The Climate Pledge are some of the things pioneered by Amazon. For more information, visit amazon.com/about and follow @AmazonNews.

Making Next Generation Quantum Computers Even More Powerful
https://hardware-specs.net/making-next-generation-quantum-computers-even-more-powerful/
Sat, 08 Jan 2022

Three resonators operating at different frequencies read a 3×3 matrix of quantum dots. Credit: © Harald Homulle 2022 EPFL

EPFL engineers have developed a method to read multiple qubits – the smallest unit of quantum data – at the same time. Their method paves the way for a new generation of even more powerful quantum computers.

“IBM and Google currently have the most powerful quantum computers in the world,” explains Professor Edoardo Charbon, head of the Advanced Quantum Architecture Laboratory (AQUA Lab) at EPFL’s Faculty of Engineering. “IBM has just unveiled a 127-qubit machine, while Google’s has 53 qubits.” The scope for making quantum computers even faster is limited, however, due to an upper limit on the number of qubits. But a team of engineers led by Charbon, in collaboration with British researchers, has just developed a promising method to overcome this technological barrier. Their approach can read qubits more efficiently, which means more of them can be packed into quantum processors. Their findings appear in Nature Electronics.

Biochemistry and cryptography

Quantum computers don’t work like the computers we’re used to. Instead of having a separate processor and memory chip, the two are combined into a single unit called a qubit. These computers use quantum properties like superposition and entanglement to perform complex calculations that ordinary computers could never do in a reasonable amount of time. Potential applications of quantum computers include biochemistry, cryptography, etc. The machines used by research groups now have around ten qubits.

“Our challenge now is to interconnect more qubits in quantum processors – we’re talking hundreds, if not thousands – in order to increase the processing power of computers,” says Charbon.

The number of qubits is currently limited by the fact that there is not yet a technology that can read all of them quickly. “To further complicate matters, qubits operate at temperatures close to absolute zero, or −273.15°C,” says Charbon. “This makes them even more difficult to read and control. What engineers usually do is run the machines at room temperature and control each qubit individually.”

“It’s a real breakthrough”

Andrea Ruffino, a doctoral student in Charbon’s lab, has developed a method for reading nine qubits simultaneously and efficiently. Moreover, his approach could be extended to larger qubit matrices. “Our method is based on using the time and frequency domains,” he explains. “The basic idea is to reduce the number of connections by making three qubits work with a single link.”
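
As an illustration of the frequency-domain idea only (not the EPFL implementation), the NumPy sketch below shows how several readout signals at different tone frequencies can share a single line and still be recovered independently by demodulation. The frequencies, sample rate, noise level, and amplitude encoding are all arbitrary choices for the demo.

```python
# Minimal sketch: frequency-multiplexed readout, where three "qubit" states
# share one line as tones at different frequencies and are recovered by
# demodulating each frequency separately. All parameters are illustrative.
import numpy as np

fs = 1e6                        # sample rate (Hz), illustrative
t = np.arange(0, 1e-3, 1 / fs)  # 1 ms readout window
freqs = [50e3, 70e3, 90e3]      # one resonator frequency per "qubit"
states = [1, 0, 1]              # the states we want to read out

# Each state modulates the amplitude of its own tone; all tones share one line.
line = sum(s * np.cos(2 * np.pi * f * t) for s, f in zip(states, freqs))
line += 0.1 * np.random.randn(t.size)  # a little noise

# Demodulate: mix with a reference at each frequency and average (low-pass).
recovered = []
for f in freqs:
    amplitude = 2 * np.mean(line * np.cos(2 * np.pi * f * t))
    recovered.append(int(amplitude > 0.5))  # threshold back to 0/1

print(recovered)  # expected: [1, 0, 1]
```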

EPFL doesn’t have a quantum computer, but that didn’t stop Ruffino. He found a way to emulate qubits and perform experiments under almost the same conditions as a quantum computer. “I incorporated quantum dots, which are nanoscale semiconductor particles, into a transistor. That gave me something that works the same way as qubits,” says Ruffino.

He is the first AQUA Lab doctoral student to study this subject for his thesis. “Andrea has shown that his method works with integrated circuits on ordinary computer chips and at temperatures approaching those of qubits,” Charbon explains. “This is a real breakthrough that could lead to large integrated qubit matrix systems with the necessary electronics. The two types of technologies could work together in a simple, efficient and repeatable way.”

Reference: “A cryo-CMOS chip that integrates silicon quantum dots and multiplexed dispersive readout electronics” by Andrea Ruffino, Tsung-Yeh Yang, John Michniewicz, Yatao Peng, Edoardo Charbon and Miguel Fernando Gonzalez-Zalba, 27 December 2021, Nature Electronics.
DOI: 10.1038/s41928-021-00687-6



Corsair launches its mini PC ONE i300 with a Core i9 12900K and up to an RTX 3080 Ti
https://hardware-specs.net/corsair-launches-its-mini-pc-one-i300-with-a-core-i9-12900k-and-up-to-an-rtx-3080-ti/
Wed, 05 Jan 2022

Corsair announced its latest line of ONE i300 compact desktops at CES 2022. The systems include the latest 12th-generation processors and DDR5 memory, among other specs you wouldn’t expect in a pre-built machine with a capacity of only 12 liters.

The i300 features a Core i9 12900K and up to 64GB of Corsair Vengeance DDR5 memory. Add to that an RTX 3080 or an RTX 3080 Ti and it is clear that this is no run-of-the-mill PC. It’s a serious gaming machine. The i300 is equipped with Thunderbolt 4 and it can drive four 4K displays simultaneously, so it’s not just useful for gaming.



2021: review of a year of Apple hardware
https://hardware-specs.net/2021-review-of-a-year-of-apple-hardware/
Fri, 31 Dec 2021

Like every year, Apple launched a number of new or updated products in 2021. At the same time, a few offerings were dropped. Let’s take a look at Apple hardware in 2021, new and retired.

New and Updated Apple Products

During 2021, the Cupertino-based company continued its transition away from Intel processors in its computers. We saw the launch of the M1 iMac, following the 2020 introduction of the M1 MacBook and Mac mini products. Later in the year, Apple expanded the lineup with the 14-inch and 16-inch MacBook Pros, offering M1 Pro and M1 Max processors. We also got a new M1 iPad Pro.


Apple also chose 2021 to launch its AirTag item finder, along with an updated 4K Apple TV with a new Siri remote. MagSafe-compatible iPhones received an Apple-branded battery pack, and Cupertino unveiled the iPhone 13 product line.

Color options for the iPhone 13 Pro and iPhone 13 Pro Max

Other product updates in 2021 included the iPad mini 6, the ninth-generation iPad, and the Apple Watch Series 7. Cupertino also unveiled a new generation of AirPods, the Beats Fit Pro, and new color options for its HomePod mini speaker.

Apple hardware discontinued in 2021

Throughout the year, a number of hardware products also disappeared. Apple announced in March 2021 the discontinuation of the full-size HomePod. Apple still provides software updates, service, and support for the speakers, but wants to focus its efforts on the smaller HomePod mini.

Likewise, Cupertino gave the ax to the iMac Pro. Claiming that the 27-inch iMac was the preferred choice of most iMac users, Apple stopped making the pro model of its all-in-one desktop PC. The 21.5-inch iMac is also gone, perhaps bad news for educational institutions.

Interestingly enough, Apple also ditched the Space Gray versions of its Magic Keyboard, Magic Trackpad, and Magic Mouse 2 on a stand-alone basis. However, consumers can still get the accessories in silver.

The launch of the iPhone 13 models meant it was time to stop manufacturing an older smartphone option. The iPhone XR is no longer, leaving you to choose between the second-generation iPhone SE, iPhone 11, iPhone 12 and 12 mini, or iPhone 13.

Other discontinued products, retired after being refreshed, included the iPhone 12 Pro, iPhone 12 Pro Max, Apple Watch Series 6, fifth-generation iPad mini, eighth-generation iPad, and first-generation Apple TV 4K.

Overall a decent year for Apple hardware

We received a number of great additions to the Apple hardware lineup in 2021. The evolving M1 system architecture shows what is possible for upcoming Macs, and we’re excited to see where the iPhone lineup is going. It’s unfortunate that Apple ditched the bigger HomePod, however.



Kyoto University supercomputer system ‘accidentally’ erases 77TB of data after suffering from technical error
https://hardware-specs.net/kyoto-university-supercomputer-system-accidentally-erases-77tb-of-data-after-suffering-from-technical-error/
Fri, 31 Dec 2021

A supercomputer system based at Japan’s largest research center, Kyoto University, suffered from a technical error. The issue resulted in a massive loss of 77 terabytes of data that was being routinely backed up at that time.

The millions of deleted files reportedly came from several research organizations.

Supercomputer system technical problem

(Photo: STR / JIJI PRESS / AFP via Getty Images) Japan’s Fugaku supercomputer at the Riken Center for Computational Science in Kobe, Hyogo Prefecture, June 16, 2020.

According to a recent report by Gizmodo on Friday, December 31, the incident reportedly took place between December 14 and 16. The unexpected error erased millions of files, a number that could reach around 34 million.

The report adds that these files came from 14 different research institutes that rely on Kyoto University’s supercomputers. In addition, the Japanese university operates DataDirect Networks ExaScaler storage as well as a Hewlett Packard Enterprise (HPE) Cray computer system, both of which are heavily used for research studies.

At the time of writing, the university had yet to identify the exact nature of the deleted files or the root cause of the technical error. The school said earlier that files from at least four groups cannot be recovered.


How much does supercomputing research cost

As BleepingComputer pointed out in its report, compute-intensive research is not something to be taken lightly: depending on the scale of the research, hourly operation alone can cost hundreds of dollars.

After the unexpected incident that Kyoto experienced this month, the university has released further information about the loss of supercomputer storage data.

This is what the school posted on its page (translated into English):

Dear users of high-performance computing services

Today, a bug in the storage system backup program caused a crash in which some files in / LARGE0 were lost. We have stopped addressing the issue, but we may have lost close to 100TB of files and are investigating the extent of the impact.

We will contact the persons concerned individually.

We apologize for the inconvenience to all users.

The most interesting thing about supercomputing is that it goes well beyond the usual work done on normal computers, relying on very complex mathematical calculations to carry out processing across the system.

Additionally, experts are exploring its uses by incorporating supercomputers in several fields, including physics, climate change, and other areas of research.

In a December 25 report from Interesting Engineering, the Jean Zay supercomputer in France became the first HPC system to have a photonic coprocessor. Instead of using electric current, the coprocessor relies on light to process information.

Predictive supercomputers

Earlier this year, Tech Times reported that astronomers used the ATERUI II supercomputer to simulate 4,000 instances of the universe. Researchers operating from the National Astronomical Observatory of Japan (NAOJ) mapped the models to examine the primitive state of the universe.

Another report from the same tech publication in February discussed how the world’s fastest computer could predict tsunamis in real time. With the help of artificial intelligence (AI), experts were able to create 20,000 possible natural disaster scenarios.

Since Japan sits on the Pacific Ring of Fire, experts must be prepared to act in such situations. That way, they can warn people in advance if an impending tidal wave is set to hit a specific area.


This article is the property of Tech Times

Written by Joseph Henry




Food processors, agribusiness and farms encouraged to tighten cybersecurity measures
https://hardware-specs.net/food-processors-agribusiness-and-farms-encouraged-to-tighten-cybersecurity-measures/
Thu, 23 Dec 2021

A cybersecurity expert says the agriculture industry needs to be aware of the possibility of ransomware attacks and take appropriate steps to ensure that it is not a target for hackers.

There was a wake-up call in early June when the world’s largest meat processing company was hit by a ransomware attack. Ransomware is a type of cyberattack that infects your device and holds your information hostage until you pay a fee. Brazilian company JBS had to shut down operations for a day at several plants across North America, including the large beef processing plant in High River, Alta.

JBS decided to pay a Russian hacking group $13.3 million to obtain the decryption key.

David Mason is the director of corporate security at Darktrace, a cybersecurity artificial intelligence firm. He says it is very important to have up-to-date antivirus software and firewall protection.

“There is always something known as scanning going on,” he said. “Often it is not a malicious scan. It’s just big companies sending out messages, but it’s a red flag for people to realize that there is a possibility that someone could come knocking on my door and I am not aware of it.”

Mason says hackers prefer targets where they can walk through the front door, which is why he recommends changing passwords often.

“Use strong passwords for everything. Do not use the same password everywhere, because if it is compromised, everything is compromised. I know it’s a pain in the neck, but do it and you’ll be fine,” he said.
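
As a small illustration of that advice, here is a hedged Python sketch that generates a distinct random password per account using the standard library’s secrets module; the length, character set, and account names are arbitrary examples, not anything recommended by Mason or Darktrace specifically.

```python
# Minimal sketch of the "strong, unique password" advice using Python's
# standard-library secrets module; length and character set are arbitrary.
import secrets
import string

def make_password(length: int = 20) -> str:
    """Generate a random password from letters, digits, and punctuation."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

# One distinct password per account, so a single compromise stays contained.
for account in ["email", "farm-management-portal", "remote-desktop"]:
    print(account, make_password())
```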




Artificial intelligence simulates microprocessor performance in real time
https://hardware-specs.net/artificial-intelligence-simulates-microprocessor-performance-in-real-time/
Wed, 22 Dec 2021

This approach is detailed in a paper presented at MICRO-54, the 54th IEEE/ACM International Symposium on Microarchitecture. MICRO-54 is one of the top conferences in the field of computer architecture, and the paper was selected as the conference’s best publication.
“This is a problem that needs to be studied in depth and has traditionally relied on additional circuitry to solve,” said Zhiyao Xie, lead author of the paper and a doctoral student in the lab of Yiran Chen, professor of electrical and computer engineering at Duke. “But our approach runs directly on the microprocessor in the background, which opens up a lot of new opportunities. I think that’s why people are so excited about it.”

In modern computer processors, computation cycles occur on the order of 3 trillion times per second. Tracking the energy consumed by such rapid switching is important to maintaining the performance and efficiency of the entire chip. If a processor consumes too much power, it can overheat and cause damage. Sudden fluctuations in power demand can cause internal electromagnetic complications that slow down the entire processor.
By implementing software that can predict and prevent these undesirable extremes, computer engineers can protect their hardware and improve its performance. But such schemes come at a cost. Keeping pace with modern microprocessors typically requires valuable additional hardware and computing power.
“APOLLO comes close to an ideal power estimation algorithm that is both accurate and fast and can easily be built into a processing core at a low power cost,” said Xie. “Since it can be used in any type of processing unit, it could become a common component in future chip designs.”
The secret to APOLLO’s power is artificial intelligence. The algorithm developed by Xie and Chen uses AI to identify and select just 100 of a processor’s millions of signals, the ones most closely correlated with its power consumption. It then builds a power model from those 100 signals and monitors them to predict the performance of the entire chip in real time.
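
The paper’s exact selection and modeling procedure is not described here, but the general shape of the idea can be sketched in a few lines. The following Python example, on synthetic data, uses lasso regression as a stand-in for the signal-selection step and then fits a small linear power model on only the chosen signals; the signal counts, alpha value, and data are illustrative assumptions, not APOLLO’s actual method.

```python
# Minimal sketch of the idea (not the APOLLO implementation): use sparse
# regression to pick a small subset of on-chip signals that best predict
# power, then fit a lightweight model on just those signals. Synthetic data.
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

rng = np.random.default_rng(0)
n_cycles, n_signals, n_keep = 4000, 1000, 100

# Binary "signal toggles" per cycle, and a power trace driven by a sparse subset.
signals = rng.integers(0, 2, size=(n_cycles, n_signals)).astype(float)
true_weights = np.zeros(n_signals)
hot = rng.choice(n_signals, 150, replace=False)
true_weights[hot] = rng.uniform(0.5, 2.0, size=150)
power = signals @ true_weights + rng.normal(0, 0.5, n_cycles)

# Step 1: sparse selection of the signals most predictive of power.
selector = Lasso(alpha=0.05, max_iter=5000).fit(signals, power)
keep = np.argsort(np.abs(selector.coef_))[-n_keep:]

# Step 2: a small linear power model over only the selected signals,
# cheap enough to evaluate alongside the running processor.
model = LinearRegression().fit(signals[:, keep], power)
print("R^2 on selected signals:", model.score(signals[:, keep], power))
```
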
Because this learning process is autonomous and data-driven, it can be implemented on most computer processor architectures, even those that have not yet been invented. While it doesn’t need any human designer expertise to do its job, the algorithm can help human designers do theirs.
“Once the AI picks out the 100 signals, you can look at the algorithm and see what they are in relation to power consumption and performance,” Xie said.
This work is part of a collaboration with Arm Research, a computer engineering research organization that aims to analyze the disruptions affecting the industry and create advanced solutions that can be deployed years in advance. APOLLO has been validated on some of today’s top processors with the help of Arm Research. But the algorithm needs to be thoroughly tested and evaluated on more platforms before it can be adopted by commercial computer manufacturers, the researchers said.
“Arm Research has partnered with and secured funding from some of the best-known companies in the industry, such as Intel and IBM, and forecasting power consumption is one of their top priorities,” said Chen. “Programs like this provide our students with an opportunity to work with these industry leaders, and these results make them want to continue working with and hire Duke graduates.”
This study was conducted under the Arm Research High-Performance A-Class CPU research program and was partially supported by the National Science Foundation (NSF-2106828, NSF-2112562) and the Semiconductor Research Corporation (SRC).



Measuring the power of a quantum computer just got faster and more accurate
https://hardware-specs.net/measuring-the-power-of-a-quantum-computer-just-got-faster-and-more-accurate/
Mon, 20 Dec 2021

Sandia National Laboratories has designed a faster, more accurate style of testing for quantum computers, like the one shown here. Credit: Bret Latter, Sandia National Laboratories

What does a quantum computer have in common with a top draft pick in sports? Both attract a lot of attention from talent scouts. Quantum computers, experimental machines capable of performing certain tasks faster than supercomputers, are constantly being evaluated, much like young athletes, for their potential to one day become revolutionary technology.

Now, scientist scouts have their first tool for ranking a prospective technology’s ability to perform realistic tasks, revealing its true potential and limitations.

A new type of benchmark test, designed by Sandia National Laboratories, predicts the likelihood that a quantum processor will run a specific program without error.

The so-called mirror-circuit method, published today in Nature Physics, is faster and more accurate than conventional tests, helping scientists develop the technologies most likely to lead to the world’s first practical quantum computer, which could dramatically accelerate research in medicine, chemistry, physics, agriculture and national security.

Until now, scientists have measured performance by running obstacle courses of random operations.

But according to the new research, conventional benchmark tests underestimate many quantum computational errors. This can lead to unrealistic expectations about the power or usefulness of a quantum machine. Mirror circuits offer a more precise test method, according to the document.

A mirror circuit is a computing routine that performs a set of calculations and then reverses them.

“It is common practice in the quantum computing community to use only random, disordered programs to measure performance, and our results show that this is not the right thing to do,” said computer scientist Timothy Proctor, a member of Sandia’s Quantum Performance Laboratory who participated in the research.

The new test method also saves time, which will help researchers evaluate increasingly sophisticated machines. Most benchmark tests check for errors by running the same set of instructions on a quantum machine and a conventional computer. If there are no errors, the results should match.

However, because quantum computers perform some calculations much faster than conventional computers, researchers can end up waiting a long time for the conventional computer to finish.

With a mirror circuit, however, the output should always be the same as the input, or some intentional modification of it. So instead of waiting, scientists can immediately check the quantum computer’s result.
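
To make the idea concrete, here is a minimal NumPy sketch of a mirror circuit on a single simulated qubit: a random gate sequence is applied and then undone in reverse order, so an error-free run returns to the starting state with probability 1. The gate set and circuit depth are arbitrary choices, and a real benchmark like Sandia’s runs on multi-qubit hardware, where noise pushes this return probability below 1.

```python
# Minimal sketch of the mirror-circuit idea on one simulated qubit:
# run a gate sequence, then run the inverse sequence in reverse order,
# and check that the state returns to where it started.
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard
X = np.array([[0, 1], [1, 0]], dtype=complex)  # Pauli-X
S = np.array([[1, 0], [0, 1j]])                # phase gate
gates = [H, X, S]

rng = np.random.default_rng(1)
sequence = [gates[i] for i in rng.integers(0, len(gates), size=20)]

state = np.array([1, 0], dtype=complex)        # start in |0>
for g in sequence:                             # forward half of the circuit
    state = g @ state
for g in reversed(sequence):                   # mirrored half: inverses in reverse
    state = g.conj().T @ state

fidelity = abs(state[0]) ** 2                  # overlap with |0>
print(f"Return probability: {fidelity:.6f}")   # 1.0 for an error-free run
```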

New method reveals flaws in conventional performance reviews

Proctor and his colleagues found that randomized tests miss or underestimate the compound effects of errors. When an error compounds, it gets worse as the program executes, like a wide receiver running the wrong route, drifting further and further from where they are supposed to be as the play goes on.

By mimicking functional programs, Sandia found that the end results often had larger deviations than randomized tests showed.

“Our benchmarking experiments revealed that the performance of current quantum computers is much more variable on structured programs” than previously known, said Proctor.

The mirror circuit method also gives scientists a better understanding of how to improve current quantum computers.

“By applying our method to today’s quantum computers, we were able to learn a lot about the errors that these particular devices experience, as different types of errors affect different programs in different ways,” said Proctor. “This is the first time that these effects have been observed in multi-qubit processors. Our method is the first tool to probe these large-scale error effects.”




More information:
Timothy Proctor, Measuring the Capabilities of Quantum Computers, Nature Physics (2021). DOI: 10.1038/s41567-021-01409-7. www.nature.com/articles/s41567-021-01409-7

Provided by Sandia National Laboratories


Quote: Measuring the power of a quantum computer just got faster and more accurate (2021, December 20), retrieved December 20, 2021 from https://phys.org/news/2021-12-quantum-power-faster-accurate.html



