
Discovering Neural Network Architectures with Evolutionary Algorithms

The brain has evolved over millions of years, from the simple neural structures of ancient worms to the complex systems found in modern humans. The human brain effortlessly performs tasks like recognizing the objects in a scene, whether an animal or a building. In contrast, artificial neural networks require years of expert design and research to accomplish comparable tasks, such as object detection in images, identifying genetic variants, or aiding medical diagnosis. Ideally, we would like an automated system that can generate the right network architecture for any given task.

One promising approach to generating these architectures is evolutionary algorithms. Traditional research on neural network topology laid the groundwork for these methods, and today's computational resources make it possible to apply them at scale. Teams at OpenAI, Uber Labs, Sentient Labs, and DeepMind are actively exploring this area, and Google's Brain team has been working on automated machine learning (AutoML). Beyond learning-based methods such as reinforcement learning, we asked: can we use our computational resources to evolve image classifiers at unprecedented scale, minimizing expert involvement while maximizing performance?

To address these questions, we published two papers. In our 2017 ICML paper, "Large-Scale Evolution of Image Classifiers," we set up an evolutionary process with simple building blocks and trivial initial conditions. Starting from a basic model, the algorithm gradually evolved more complex architectures, and the resulting classifiers were comparable to hand-designed models. This was encouraging because it suggested that users without deep expertise could benefit from automated design; many users need better models but lack the time or knowledge to become machine learning experts.
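The select-mutate-replace loop described above can be sketched as a small tournament-style evolutionary search. This is an illustrative toy, not the paper's implementation; the function names and the dict-based representation of an individual are assumptions for the sketch.

```python
import random

def evolve(population_size, generations, mutate, evaluate, seed_model):
    """Minimal evolutionary loop: repeated two-way tournaments in which
    the fitter individual produces a mutated child that replaces the loser."""
    # Start from identical copies of a simple seed model.
    population = [{"model": seed_model, "fitness": evaluate(seed_model)}
                  for _ in range(population_size)]
    for _ in range(generations):
        # Pick two individuals at random; the fitter one reproduces.
        a, b = random.sample(range(len(population)), 2)
        if population[a]["fitness"] >= population[b]["fitness"]:
            winner, loser = a, b
        else:
            winner, loser = b, a
        child = mutate(population[winner]["model"])
        # The weaker individual is removed from the population.
        population[loser] = {"model": child, "fitness": evaluate(child)}
    return max(population, key=lambda ind: ind["fitness"])
```

On a toy problem where a "model" is just a number and fitness peaks at 3, a run such as `evolve(20, 500, lambda m: m + random.uniform(-1, 1), lambda m: -(m - 3) ** 2, 0.0)` drifts the population toward the optimum; in the papers, each individual is instead a full architecture trained on CIFAR-10.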
A natural next step was to combine manual design with evolution and see whether the hybrid outperforms either method alone. In our 2018 paper, "Regularized Evolution for Image Classifier Architecture Search," we refined the process with more sophisticated building blocks and better initial conditions, and we used Google's TPUv2 chips to scale up the experiments. This combination of modern hardware, expert knowledge, and evolution produced state-of-the-art results on the CIFAR-10 and ImageNet benchmarks.

An experiment from our first paper illustrates the process. Each point in the figure represents a neural network trained on CIFAR-10. Initially, the population consists of 1,000 identical simple seed models. Over time, the algorithm selects higher-performing networks, mutates them to produce offspring, and replaces weaker ones, mimicking biological evolution, where only the fittest survive.

Our second paper reduced the search space to make it more manageable. We removed possible large-scale errors from the search space, much as one would avoid placing walls on the roof when building a house. By restricting the search to stable architectural units, we helped the algorithm find better solutions. Specifically, Zoph et al. introduced a modular building block called a "cell," which can be designed flexibly while being repeated in a fixed stack. In our work, we applied evolutionary algorithms to this cell-based search space: a mutation either randomly rewires a cell's inputs or replaces one of its operations. The mutations were simple, but the initial conditions were more advanced, starting from models built of these pre-designed cell stacks. This approach let us evolve high-quality models more efficiently.

To validate our findings, we compared evolution with other search methods, namely reinforcement learning and random search. On CIFAR-10, evolution outperformed reinforcement learning early in the search, which is crucial when computational resources are limited.
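The two cell mutations mentioned above (rewiring an input, or swapping an operation) can be sketched on a simplified cell encoding. The list-of-dicts representation and the operation vocabulary below are assumptions for illustration; the actual cell search space in the paper is richer.

```python
import copy
import random

# Hypothetical operation vocabulary for the sketch, not the paper's exact set.
OPS = ["sep_conv_3x3", "sep_conv_5x5", "avg_pool_3x3", "max_pool_3x3", "identity"]

def mutate_cell(cell):
    """Apply one of two mutations to a cell: rewire a randomly chosen
    node's input, or replace its operation. A cell is a list of nodes;
    node i may read from any earlier node (index < i) or from one of
    the two cell inputs, encoded here as indices -2 and -1."""
    child = copy.deepcopy(cell)  # the parent stays in the population unchanged
    node = child[random.randrange(len(child))]
    if random.random() < 0.5:
        # Hidden-state mutation: reconnect this node to a random earlier node
        # or cell input.
        node["input"] = random.randrange(-2, child.index(node))
    else:
        # Op mutation: swap in a different operation.
        node["op"] = random.choice([op for op in OPS if op != node["op"]])
    return child
```

Each mutation changes exactly one connection or one operation, which keeps children structurally close to their parents, the property that makes small mutation steps effective in this search space.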
It also showed greater robustness across datasets and search spaces. A key feature of our evolutionary algorithm was a form of regularization: instead of eliminating the worst networks, we removed the oldest ones, regardless of their accuracy. This improved the stability and final accuracy of the search. Because weights are not inherited, every surviving architecture must be retrained from scratch and still perform well, which keeps the population robust and the results more reliable.

Our most advanced model, AmoebaNet, represents a significant achievement for AutoML. These experiments required massive computational power, using hundreds of GPUs and TPUs. But just as today's computers outclass the machines of decades past and are now ubiquitous, we hope these techniques will become commonplace. Our goal is to inspire further research and development in automated neural network design.
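The age-based removal described above can be sketched as follows. This is a simplified illustration of the regularized ("aging") evolution idea, not the paper's code; the function names and the dict-based individuals are assumptions.

```python
import random
from collections import deque

def regularized_evolution(cycles, population_size, sample_size,
                          mutate, evaluate, seed_model):
    """Aging evolution: each cycle samples a few individuals, mutates the
    best of the sample, and removes the OLDEST member of the population
    rather than the worst."""
    population = deque()  # ordered oldest-first, so aging is a popleft
    history = []          # every model ever evaluated
    # Seed the population with identical simple models.
    for _ in range(population_size):
        ind = {"model": seed_model, "fitness": evaluate(seed_model)}
        population.append(ind)
        history.append(ind)
    for _ in range(cycles):
        # Tournament selection over a small random sample.
        sample = random.sample(list(population), sample_size)
        parent = max(sample, key=lambda ind: ind["fitness"])
        child_model = mutate(parent["model"])
        child = {"model": child_model, "fitness": evaluate(child_model)}
        population.append(child)
        history.append(child)
        population.popleft()  # age-based removal: the oldest dies
    return max(history, key=lambda ind: ind["fitness"])
```

Contrast this with the loop from the first paper: there, the tournament loser was replaced; here, age alone decides who dies, so even a currently high-scoring architecture must keep producing good descendants to persist, which is what regularizes the search.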
