Photolithography involves manipulating light to precisely etch features onto a surface, and is commonly used to fabricate computer chips and optical devices like lenses. But tiny deviations during the manufacturing process often cause these devices to fall short of their designers’ intentions.
To help close this design-to-manufacturing gap, researchers from MIT and the Chinese University of Hong Kong used machine learning to build a digital simulator that mimics a specific photolithography manufacturing process. Their technique uses real data gathered from the photolithography system, so it can more accurately model how the system would fabricate a design.
The researchers integrate this simulator into a design framework, along with another digital simulator that emulates the performance of the fabricated device in downstream tasks, such as producing images with computational cameras. These connected simulators enable a user to produce an optical device that better matches its design and reaches the best possible task performance.
This technique could help scientists and engineers create more accurate and efficient optical devices for applications like mobile cameras, augmented reality, medical imaging, entertainment, and telecommunications. And because the pipeline for learning the digital simulator uses real-world data, it can be applied to a wide range of photolithography systems.
“This idea sounds simple, but the reasons people haven’t tried this before are that real data can be expensive and there are no precedents for how to effectively coordinate the software and hardware to build a high-fidelity dataset,” says Cheng Zheng, a mechanical engineering graduate student who is co-lead author of an open-access paper describing the work. “We have taken risks and done extensive exploration, for example, developing and trying characterization tools and data-exploration strategies, to determine a working scheme. The result is surprisingly good, showing that real data work much more efficiently and precisely than data generated by simulators composed of analytical equations. Even though it can be expensive and one can feel clueless at the beginning, it is worth doing.”
Zheng wrote the paper with co-lead author Guangyuan Zhao, a graduate student at the Chinese University of Hong Kong; and her advisor, Peter T. So, a professor of mechanical engineering and biological engineering at MIT. The research will be presented at the SIGGRAPH Asia Conference.
Printing with light
Photolithography involves projecting a pattern of light onto a surface, which causes a chemical reaction that etches features into the substrate. However, the fabricated device ends up with a slightly different pattern because of minuscule deviations in the light’s diffraction and tiny variations in the chemical reaction.
Because photolithography is complex and hard to model, many existing design approaches rely on equations derived from physics. These general equations give some sense of the fabrication process but can’t capture all deviations specific to a photolithography system. This can cause devices to underperform in the real world.
For their technique, which they call neural lithography, the MIT researchers build their photolithography simulator using physics-based equations as a base, and then incorporate a neural network trained on real, experimental data from a user’s photolithography system. This neural network, a type of machine-learning model loosely based on the human brain, learns to compensate for many of the system’s specific deviations.
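As a rough illustration of this hybrid idea (a minimal sketch, not the authors’ actual implementation), a learned simulator can wrap a physics-based forward model with a small neural network that predicts the residual, system-specific deviations. All class, layer, and parameter names below are hypothetical:

```python
import torch
import torch.nn as nn

class HybridLithoSimulator(nn.Module):
    """Hypothetical sketch: physics-based prediction plus a learned correction."""
    def __init__(self, physics_model: nn.Module):
        super().__init__()
        self.physics_model = physics_model  # analytical optics/resist equations
        # Small CNN that learns tool-specific deviations from real measurements
        self.correction = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, kernel_size=3, padding=1),
        )

    def forward(self, design_mask: torch.Tensor) -> torch.Tensor:
        # The physics equations give a first-order prediction of the printed structure
        base = self.physics_model(design_mask)
        # The neural network nudges that prediction toward what the real tool produces
        return base + self.correction(base)
```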
The researchers gather data for their method by generating many designs that cover a wide range of feature sizes and shapes, which they fabricate using the photolithography system. They measure the final structures and compare them with design specifications, pairing those data and using them to train a neural network for their digital simulator.
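In the same hypothetical style, training then amounts to fitting the learned simulator to pairs of designs and the measured structures they produced. The dataset layout and loss choice here are assumptions for illustration only:

```python
import torch

def train_simulator(simulator, designs, measurements, epochs=100, lr=1e-3):
    """Fit the learned simulator to (design, measured structure) pairs."""
    optimizer = torch.optim.Adam(simulator.parameters(), lr=lr)
    loss_fn = torch.nn.MSELoss()
    for _ in range(epochs):
        for design, measured in zip(designs, measurements):
            predicted = simulator(design)        # predicted fabricated structure
            loss = loss_fn(predicted, measured)  # compare against the measured structure
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return simulator
```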
“The performance of learned simulators depends on the data fed in, and data artificially generated from equations can’t cover real-world deviations, which is why it is important to have real-world data,” Zheng says.
Dual simulators
The digital lithography simulator consists of two separate components: an optics model that captures how light is projected onto the surface of the device, and a resist model that shows how the photochemical reaction occurs to produce features on the surface.
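A sketch of that two-stage structure, again with illustrative names rather than the paper’s code, chains the optics stage into the resist stage:

```python
import torch.nn as nn

class TwoStageLithoSimulator(nn.Module):
    """Hypothetical sketch: optics model feeding a resist model."""
    def __init__(self, optics_model: nn.Module, resist_model: nn.Module):
        super().__init__()
        self.optics_model = optics_model  # how the projected light lands on the surface
        self.resist_model = resist_model  # how exposure turns into etched features

    def forward(self, design_mask):
        light_on_surface = self.optics_model(design_mask)
        printed_structure = self.resist_model(light_on_surface)
        return printed_structure
```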
For a downstream task, they connect this learned photolithography simulator to a physics-based simulator that predicts how the fabricated device will perform on that task, such as how a diffractive lens will diffract the light that strikes it.
The user specifies the outcomes they want a device to achieve. Then these two simulators work together within a larger framework that shows the user how to make a design that will reach those performance goals.
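One way such a framework could work, assuming both simulators are differentiable, is to optimize the design directly through them with gradient descent. The function and variable names below are illustrative, not taken from the paper:

```python
import torch

def optimize_design(litho_sim, task_sim, target, design_shape, steps=500, lr=0.01):
    """Search for a design whose *fabricated* version performs best on the downstream task."""
    design = torch.zeros(design_shape, requires_grad=True)
    optimizer = torch.optim.Adam([design], lr=lr)
    for _ in range(steps):
        fabricated = litho_sim(design)   # learned model of what the tool will actually print
        output = task_sim(fabricated)    # e.g., the image a diffractive lens would form
        loss = torch.nn.functional.mse_loss(output, target)
        optimizer.zero_grad()
        loss.backward()                  # gradients flow back through both simulators
        optimizer.step()
    return design.detach()
```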
“With our simulator, the fabricated object can get the best possible performance on a downstream task, like computational cameras, a promising technology to make future cameras miniaturized and more powerful. We show that, even if you use post-calibration to try to get a better result, it will still not be as good as having our photolithography model in the loop,” Zhao adds.
They tested this technique by fabricating a holographic element that generates a butterfly image when light shines on it. Compared to devices designed using other techniques, their holographic element produced a near-perfect butterfly that more closely matched the design. They also produced a multilevel diffraction lens, which had better image quality than other devices.
In the future, the researchers want to enhance their algorithms to model more complicated devices, and also test the system using consumer cameras. In addition, they want to expand their approach so it can be used with different types of photolithography systems, such as systems that use deep or extreme ultraviolet light.
This research is supported, in part, by the U.S. National Institutes of Health, Fujikura Limited, and the Hong Kong Innovation and Technology Fund.
The work was carried out, in part, using MIT.nano’s facilities.