Peter Naftaliev Lecture Series on Learning How to Go from 2D to 3D with Machine Learning

Peter Naftaliev is an artificial intelligence (AI) researcher and consultant who works on turning 2D content into 3D. Besides his work for his company Abelians, he publishes research at his blog here and draws on 15 years of machine learning experience to teach the technology. If you’re interested in Project Management for Machine Learning, are looking for a Deep Learning Dictionary, or want to write your own patent, Naftaliev is a wealth of information.

Naftaliev first came to our attention when he published work showing how to take a 2D image and turn it into a 3D-printable file. Creating 3D content, let alone 3D-printable content, is difficult. CAD is still too complex for most people, and 3D scanning works but is finicky. Nearly everyone, however, can draw or take photos, which produces digital 2D content. If we could easily make 2D data 3D printable, people could much more easily create their own 3D-printed products, and companies could let consumers mass-customize things or order custom-fit items such as shoe soles.

So Naftaliev’s work at the cutting edge of AI and 3D printing is, to me, of potentially crucial importance to the future of 3D printing. Eventually, raw computing power, improving cameras, and better software could get us to a stage where all of our phones are 3D scanners that create 3D content easily. Until then, and even afterwards, AI and machine learning could let us take far more content and make it 3D.

Machine learning and AI, however, are treated as a kind of magical sauce that is supposed to make everything better for everyone all the time. I remember when 3D printing was seen in the same light. I tried to be a realistic, enlightening, but not dazzlingly optimistic guide for people through those hype-filled times. For AI and the intersection of machine learning and 3D printing, Naftaliev is that person for me. And not for me alone.

Bored during the Corona lockdown, Naftaliev decided to give a Zoom lecture on deep learning. He had a lecture already prepared that he couldn’t deliver because the conferences and talks where he was due to speak had been cancelled. So he made a Reddit post announcing an online lecture. On the strength of that one post, 350 people signed up. He now runs a Reddit community devoted to “2D, 3D and AI. Video, image, 3D modeling, depth maps, neural networks.” You can subscribe to the newsletter here or join the meetup here to be kept abreast of goings-on.

Enthused by this, he continued with talks on:

“image processing, AI, 3D modeling, technological advancements. But, more importantly, it is a step to try to democratize access to information to anyone around the world. Academic research and papers can be very hard to figure out even for people who are working in the subject. Reading just one paper and truly understanding what is going on can take several good days of work and sometimes requires access to people who have knowledge in the field. And, what’s more, a lot of the research does not come with an open source where you can try to test things out yourself (it is extremely hard to replicate the code and results of a paper, if not impossible because of access to training data and computational resources). The research that does come with code many times is still hard to figure out, sometimes there are bugs or things in the code that do not align with the research. I want to help make all of this more accessible to people everywhere.”

He feels that “if I or anyone else has put the time and effort to understand some new research that is out, why not share it with others.” He does each live lecture twice, once for the east side of the planet, once for the west. He then offers these lectures for download. The next lecture deals with generating art using neural networks.

In the future he hopes “to get the authors of the most important researches in our field to come and present their papers, code and the latest advancements – live, online, for anyone who is interested in learning more.” The lectures are clear and super interesting but not necessarily for casual viewing, so paying attention helps. Naftaliev means for them to be for,

“People anywhere in the world with technical orientation who are interested in machine learning, or are already full-fledged practitioners/researchers who want to expand their understanding of sub-topics in this sphere. We are also touching the boundary of digital art, so people from the digital arts that want to see what the latest technological research in the field can do and how they can use it for their art.”

In terms of background,

“Mathematical and programming background is a plus. We do explain basic concepts in machine learning if we see that the audience is not fully familiar with them. Every participant who signs in to listen to a lecture fills out a small bio about himself so we know how much intro material we need to explain and how deep we can dive.”

Naftaliev hopes that you can learn,

“Which papers and open source findings are interesting and relevant, the current state of the art results and how to replicate them, current technological limitations in industry and academia, expand your horizons about what is possible to achieve with AI in everything to do with image processing, 3D modeling, signal processing and more. I am also experimenting with allowing people to get to know each other and form connections around the world by sharing these common interests.”

You can find the YouTube channel here. Below you can see how you can go from 2D to 3D using neural nets.

I think that this is fascinating and, with Naftaliev’s help, you can be transported to the cutting edge of turning 2D into 3D and understand more about machine learning. I really think that this is an emerging frontier for our industry and am very grateful that Naftaliev will be giving this series of lectures. Subscribe here.

A lecture by guest Dr. Eyal Gruss, “Fake Anything: the Art of Deep Learning,” is available here.


Free Automated Software to Design 3D Printable Cranial Implants

Repairing skull defects with custom cranial implants, a procedure known as cranioplasty, is expensive and time-consuming, as the existing process often results in bottlenecks due to long wait times for the implant to be designed, manufactured, and shipped. While 3D printing the implants can help with these issues, a team of researchers from Graz University of Technology and the Medical University of Graz in Austria published a paper, “An Online Platform for Automatic Skull Defect Restoration and Cranial Implant Design,” about an automated cranial implant design system they’ve devised that can do even better.

“Due to the high requirements for cranial implant design, such as the professional experience required and the commercial software, cranioplasty can result in a costly operation for the health care system,” the researchers wrote. “On top, the current process is a cause of additional suffering for the patient, since a minimum of two surgical operations are involved: the craniotomy, during which the bony structure is removed, and the cranioplasty, during which the defect is restored using the designed implant. When the cranial implant is externally designed by a third-party manufacturer, this process can take several days [1], leaving the patient with an incomplete skull.”

In the case study they cited above, the researchers explained that a professional design center in the UK designed the cranial implant for a patient who lived in Spain. The CT scans had to be transferred from the hospital in Spain to the UK design center, and then a separate UK company 3D printed the titanium implant, which was shipped back to Spain. That’s a lot of unnecessary back and forth.

“Therefore, the optimization of the current workflow in cranioplasty remains an open problem, with implant design as primary bottleneck,” they stated.

“Illustration of In-Operation Room process for cranial implant design and manufacturing. Left: a possible workflow. Right: how the implant should fit with the skull defect in terms of defect boundary and bone thickness.”

One option is developing ad hoc free CAD software for cranial implant design, but the design process still requires expertise and an extended wait.

“In this study, we introduce a fast and fully automatic system for cranial implant design. The system is integrated in a freely accessible online platform,” the team explained. “Furthermore, we discuss how such a system, combined with AM, can be incorporated into the cranioplasty practice to substantially optimize the current clinical routine.”

The system they developed has been integrated in Studierfenster, an open, cloud-based medical image processing platform that, with the help of deep learning algorithms, automatically restores the missing part of a skull. The platform then generates the STL file for a patient-specific implant by subtracting the defective skull from the completed one, and it can be 3D printed on-site.
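To make that subtraction step concrete, here is a minimal Python sketch of the idea, assuming the defective skull and the automatically completed skull are already available as aligned binary voxel volumes. The function and variable names are illustrative, not Studierfenster’s actual code; it uses scikit-image’s marching cubes plus numpy-stl to write the STL file.

```python
# Minimal sketch of the implant-generation step: subtract the defective skull
# volume from the completed skull volume, then export the difference as STL.
# Assumes both skulls are boolean 3D arrays on the same voxel grid.
import numpy as np
from skimage import measure
from stl import mesh  # numpy-stl


def implant_to_stl(completed_skull, defective_skull, path, spacing=(1.0, 1.0, 1.0)):
    # The implant is the bone present in the completed skull but missing
    # from the defective one.
    implant = np.logical_and(completed_skull, np.logical_not(defective_skull))

    # Extract a triangle surface from the binary implant volume.
    verts, faces, _, _ = measure.marching_cubes(
        implant.astype(np.uint8), level=0.5, spacing=spacing)

    # Pack the triangles into an STL mesh and save it for 3D printing
    # or further post-processing in another application.
    implant_mesh = mesh.Mesh(np.zeros(faces.shape[0], dtype=mesh.Mesh.dtype))
    implant_mesh.vectors = verts[faces]
    implant_mesh.save(path)
```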

“Furthermore, thanks to the standard format, the user can thereafter load the model into another application for post-processing whenever necessary,” the researchers wrote. “Multiple additional features have been integrated into the platform since its first release, such as 3D face reconstruction from a 2D image, inpainting and restoration of aortic dissections (ADs) [4], automatic aortic landmark detection and automatic cranial implant design. Most of the algorithms behind these interactive features run on the server side and can be easily accessed by the client using a common browser interface. The server-side computations allow the use of the remote platform also on smaller devices with lower computational capabilities.”

3D printing the implants makes the process faster, and combining it with an automated implant design solution speeds things up even more. The researchers explained how their optimized workflow could potentially go:

“After a portion of the skull is removed by a surgeon, the skull defect is reconstructed by a software given as input the post-operative head CT of the patient. The software generates the implant by taking the difference between the two skulls. Afterwards, the surface model of the implant is extracted and sent to the 3D printer in the operation room for 3D printing. The implant can therefore be manufactured in loco. The whole process of implant design and manufacturing is done fully automatically and in the operation room.”

The cost decreases, as no experts are required, and the wait time is also reduced, thanks to the automatic implant design software and on-site 3D printing. The patient’s suffering will also decrease, since the cranioplasty can be performed right after removal of the tumor.

“Architecture of automatic cranial implant design system in Studierfenster. The server side is responsible for implant generation and mesh rendering. The browser side is responsible for 3D model visualization and user interaction.”

The team’s algorithm, which processes volumes rather than 3D mesh models, can work directly with high-dimensional imaging data and is easy to access and use through Studierfenster. Another algorithm on the server side converts the volumes of the defective skull, the completed skull, and the implant into 3D surface mesh models. Once they’re rendered, the user can inspect the downloadable models in the browser window.

“An example of automatic skull defect restoration and implant design. First row: the defective skull, the completed skull and the implant. Second row: how the implant fits with the defective skull in terms of defect boundary, bone thickness and shape. To differentiate, the implant uses a different color from the skull.”

“The system is currently intended for educational and research use only, but represents the trend of technological development in this field,” the researchers concluded. “As the system is integrated in the open platform Studierfenster, its performance is significantly dependent on the hardware/architecture of the platform. The conversion of the skull volume to a mesh can be slow, as the mesh is usually very dense (e.g., millions of points). This will be improved by introducing better hardware on the server side. Another limiting factor is the client/server based architecture of the platform. The large mesh has to be transferred from server side to browser side in order to be visualized, which can be slow, depending on the quality of the user’s internet connection.”



NTU Singapore: Robotic Post-Processing System Removes Residual Powder from 3D Printed Parts

Researchers from Nanyang Technological University in Singapore wrote a paper, titled “Development of a Robotic System for Automated Decaking of 3D-Printed Parts,” about their work attempting to circumvent a significant bottleneck in 3D printing post-processing. In powder bed AM processes, like HP’s Multi Jet Fusion (MJF), decaking consists of removing the residual powder that sticks to a part once it is taken out of the powder bed. This is mostly done by human operators using brushes, and for AM technologies that can produce hundreds of parts in one batch, it obviously takes a long time. Manual labor like this is a significant cost component of powder bed fusion processes.

An operator manually removing powder (decaking) from a 3D printed part.

“Combining Deep Learning for 3D perception, smart mechanical design, motion planning, and force control for industrial robots, we developed a system that can automatically decake parts in a fast and efficient way. Through a series of decaking experiments performed on parts printed by a Multi Jet Fusion printer, we demonstrated the feasibility of robotic decaking for 3D-printing-based mass manufacturing,” the researchers wrote.

A classic robotics problem is bin-picking, which entails selecting and removing a part from a container. The NTU researchers determined that 3D perception, which “recognizes objects and determin[es] their 3D poses in a working space,” would be important in building their bin-picking system. They built the system around a position-controlled industrial manipulator and added force control on top of it to achieve compliant motion.

The NTU team’s robotic system performs five general steps, starting with the bin-picking task, in which a suction cup picks a caked part from the origin container. The part’s underside is then cleaned by rubbing it on a brush, the part is flipped over, and the other side is cleaned. The final step is placing the cleaned part into the destination container.

Proposed robotic system design for automated decaking.

Each step has its own difficulties; for instance, caked parts overlap and are hard to detect, as they’re mostly the same color as the powder, and the residual powder and the parts have different physical properties, which makes it hard to manipulate parts with a position-controlled industrial robot.

“We address these challenges by leveraging respectively (i) recent advances in Deep Learning for 2D/3D vision; and (ii) smart mechanical design and force control,” the team explained.

The next three steps – cleaning the part, flipping it, and cleaning the other side – are tricky due to “the control of the contacts” between the parts, the robot, and the brushing system. For this, the researchers used force control to “perform compliant actions.”

Their robotic platform was built with off-the-shelf components:

  • 1 Denso VS060: Six-axis industrial manipulator
  • 1 ATI Gamma Force-Torque (F/T) sensor
  • 1 Ensenso 3D camera N35-802-16-BL
  • 1 suction system powered by a Karcher NT 70/2 vacuum machine
  • 1 cleaning station
  • 1 flipping station

The camera is placed so as to avoid collisions with the environment, objects, and the robot arm, and “to maximize the view angles.” A suction cup system was found to be the most versatile option, and the team custom-designed it to generate a high air flow rate and vacuum in order to recover recyclable powder, achieve sufficient lifting force, and firmly hold parts during brushing.

Cleaning station, comprising a fan, a brush rack, and a vacuum outlet.

They chose a passive flipping station (no actuator required) to change part orientation. The part is dropped down from the top of the station, and moves along the guiding sliders. It’s flipped once it reaches the bottom, and is then ready to be picked by the robot arm.

Flipping station.

A state machine and a series of modules make up the software system. The machine chooses the right module to execute at the right time, and also picks the “most feasible part” for decaking in the sequence.

The software system’s state machine and modules perform perception and different types of action.

“The state machine has access to all essential information of the system, including types, poses, geometries and cleanliness, etc. of all objects detected in the scene. Each module can query this information to realize its behavior. As a result, this design is general and can be adapted to many more types of 3D-printed parts,” the researchers explained.
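As a rough illustration of how such a state machine might dispatch its modules, consider the sketch below. It is written against assumed module names and a shared “blackboard” of scene information; it is not the authors’ code, only a way to picture the pick/clean/flip/clean/place sequence the paper describes.

```python
# Illustrative decaking state machine: a shared blackboard of scene
# information and a fixed module sequence per part. Names are hypothetical.
class DecakingStateMachine:
    def __init__(self, perception, picker, cleaner, flipper, placer):
        self.modules = {"perceive": perception, "pick": picker,
                        "clean": cleaner, "flip": flipper, "place": placer}
        self.blackboard = {}  # poses, geometries, cleanliness of detected objects

    def step(self):
        # Re-scan the bin and pick the most feasible part to decake next.
        detections = self.modules["perceive"].run(self.blackboard)
        part = max(detections, key=lambda d: d["pick_score"], default=None)
        if part is None:
            return False  # nothing left to decake

        # Execute the fixed pick -> clean -> flip -> clean -> place sequence.
        self.modules["pick"].run(part, self.blackboard)
        self.modules["clean"].run(part, self.blackboard)
        self.modules["flip"].run(part, self.blackboard)
        self.modules["clean"].run(part, self.blackboard)
        self.modules["place"].run(part, self.blackboard)
        return True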

The modules have different tasks, like perception, which identifies and localizes visible objects. The first stage of this task uses a deep learning network to complete instance detection and segmentation, while the second uses a segmentation mask to extract each object’s 3D points and “estimate the object pose.”

Example of the object detection module based on Mask R-CNN. The estimated bounding boxes and part segmentations are depicted in different colors and labelled with the identification proposal and confidence. We reject detections with confidence lower than 95%.

“First, a deep neural network based on Mask R-CNN classifies the objects in the RGB image and performs instance segmentation, which provides pixel-wise object classification,” the researchers wrote.

Transfer learning was applied to the pre-trained model, so the network could classify a new class of object in the bin with a high detection rate.

“Second, pose estimation of the parts is done by estimating the bounding boxes and computing the centroids of the segmented pointclouds. The pointcloud of each object is refined (i.e. statistical outlier removal, normal smoothing, etc.) and used to verify if the object can be picked by suction (i.e. exposed surfaces must be larger than suction cup area).”
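A hedged sketch of this second stage is shown below. It assumes an organized point cloud aligned with the RGB image and uses Open3D for statistical outlier removal; the confidence threshold matches the 95% cut-off mentioned above, while the suction-cup footprint and other values are placeholders rather than the paper’s exact numbers.

```python
# Sketch: from a Mask R-CNN instance mask to a rough pick pose.
import numpy as np
import open3d as o3d

CONFIDENCE_MIN = 0.95          # detections below this are rejected
SUCTION_CUP_AREA_MM2 = 300.0   # assumed cup footprint (placeholder)


def estimate_pick_pose(detection, points_xyz):
    """detection: dict with 'confidence' and a boolean 'mask'.
       points_xyz: HxWx3 organized point cloud aligned with the RGB image."""
    if detection["confidence"] < CONFIDENCE_MIN:
        return None

    # Collect the 3D points belonging to this instance.
    pts = points_xyz[detection["mask"]].reshape(-1, 3)
    cloud = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(pts))

    # Statistical outlier removal, as mentioned in the paper.
    cloud, _ = cloud.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)

    # Centroid and bounding box give a rough pose estimate.
    centroid = np.asarray(cloud.points).mean(axis=0)
    bbox = cloud.get_axis_aligned_bounding_box()

    # Only allow suction if the exposed surface is larger than the cup.
    extent = bbox.get_extent()
    if extent[0] * extent[1] < SUCTION_CUP_AREA_MM2:
        return None
    return {"centroid": centroid, "bbox": bbox}
```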

Picking and cleaning modules are made of multiple motion primitives, the first of which is picking, or suction-down. The robot picks parts with nearly flat, exposed surfaces by moving the suction cup over the part, and compliant force control tells it when to stop downward motion. It checks if the height the suction cup was stopped at matches the expected height, and then lifts the cup, while the system “constantly checks the force torque sensor” to make sure there isn’t a collision.
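In rough Python, a compliant suction-down primitive of this kind could be sketched as follows; the robot and force-torque sensor interfaces, thresholds, and step sizes are hypothetical stand-ins, not the team’s actual Denso/ATI code.

```python
# Sketch of the suction-down primitive: descend under position control while
# monitoring the force-torque sensor, stop on contact, verify the stop height,
# then lift. All interfaces and constants are illustrative.
import time

CONTACT_FORCE_N = 5.0      # stop descending above this contact force
STEP_MM = 0.5              # per-cycle descent increment
HEIGHT_TOLERANCE_MM = 3.0  # allowed mismatch with the expected part height


def suction_down(robot, ft_sensor, expected_height_mm):
    while abs(ft_sensor.force_z()) < CONTACT_FORCE_N:
        robot.move_relative(dz=-STEP_MM)   # keep descending until contact
        time.sleep(0.01)

    # Sanity check: did we stop roughly where the part surface should be?
    if abs(robot.tool_height_mm() - expected_height_mm) > HEIGHT_TOLERANCE_MM:
        raise RuntimeError("Suction stopped at an unexpected height")

    robot.enable_suction()
    robot.move_relative(dz=+50.0)          # lift while monitoring for collisions
```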

Cleaning motion primitives remove residual debris and powder from nearly flat 3D printed parts. The part is positioned over the brush rack, and compliant force control moves the robot until they make contact. In order to maintain contact between the part and the brushes, a hybrid position/force control scheme is used.

“The cleaning trajectories are planned following two patterns: spiral and rectircle,” the researchers explained. “While the spiral motion is well-suited for cleaning nearly flat surfaces, the rectircle motion aids with removing powder in concave areas.”
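The spiral pattern, as described, grows outward until it reaches a maximum radius and then keeps circling the part centroid. A small illustrative path generator, with placeholder parameters rather than the authors’ values, might look like this:

```python
# Generate an Archimedean spiral that keeps circling the centroid once the
# maximum radius is reached, as described for the spiral cleaning motion.
import numpy as np


def spiral_path(centroid_xy, max_radius=40.0, pitch=4.0,
                points_per_turn=36, extra_turns=2):
    cx, cy = centroid_xy
    turns = max_radius / pitch
    theta = np.linspace(0.0, 2 * np.pi * (turns + extra_turns),
                        int(points_per_turn * (turns + extra_turns)))
    # Radius grows linearly until max_radius, then stays constant (circling).
    r = np.minimum(pitch * theta / (2 * np.pi), max_radius)
    xs = cx + r * np.cos(theta)
    ys = cy + r * np.sin(theta)
    return np.stack([xs, ys], axis=1)  # N x 2 waypoints in the brushing plane
```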

A combination of spiral and rectircle paths is used for cleaning motions. Spiral paths are in red. The yellow dot denotes the centroid of the parts at the beginning of motion. Spiral paths are modified so they continue to circle the dot after reaching a maximum radius. The rectircle path is in blue; its parameters include width, height, and direction in the XY plane.

The team tested their system out using ten 3D printed shoe insoles. Its cleaning quality was evaluated by weighing the parts before and after cleaning, and the researchers reported the run time of the system in a realistic setting, compared to skilled human operators.

In terms of cleaning quality, the robotic system’s performance was nearly two times lower than that of the human operators, which “raised questions how task efficiency could be further improved.” Humans spent over 95% of their execution time on brushing, while the system spent only about 40% of its execution time on brushing actions; this gap is due to a person’s “superior skills in performing sensing and dexterous manipulations.” But cleaning quality was reduced when brushing time was limited to 20 seconds, which could mean that quality would improve by upgrading the cleaning station and “prolonging the brushing duration.”

Additionally, humans had more consistent results, as they are able to adjust their motions as needed. The researchers believe that adding a cleanliness evaluation module, complete with a second 3D camera, to their system would improve this.

Average time-line representation of actions used for cleaning.

“We noted that our robot ran at 50% max speed and all motions were planned online. Hence, the system performance could be further enhanced by optimizing these modules,” the team wrote. “Moreover, our perception module was running on a CPU; implementations of better computing hardware would thus improve the perception speed.”

While these results are mainly positive, the researchers plan to further validate the system by improving its end-effector design, optimizing task efficiency, and adapting it to work with more general 3D printed parts.



DeepRC Robot Car is a new kind of Smart Car #MachineLearning #ArtificialIntelligence #SmartPhone #3Dprinting #Robot #DeepLearning #TensorFlow @pyetras @hackaday

From the ‘DeepRC Robot Car’ project on hackaday.io by Piotr Sokólski

 

The ‘DeepRC Robot Car’ project on hackaday.io by Piotr Sokólski aims to create a miniature self-driving car that can be trained at home. Probably the coolest part of the project is that it uses a smartphone to provide several pieces of the hardware. For instance, a mirror shifts the phone camera’s view to the front of the vehicle so the car can see the road (see below). The chassis was 3D printed, and a number of other small electronic components were used to build the car (like the nRF52 SoC).

For controlling the actuators and reading telemetry data a small number of electronic components are installed on the chassis. The main circuit board is based on an excellent NRF52 SOC. It provides a Bluetooth LE radio to communicate with the phone. The servo is controlled by the chip directly, however the motor requires an additional Electronic Speed Controller (ESC).

From the ‘DeepRC Robot Car’ project on hackaday.io by Piotr Sokólski

The software powering the robot was split between an app on the phone and a computer. @pyetras trained the robot to avoid collisions using deep reinforcement learning. Specifically, the TensorFlow agents implementation of the Soft Actor-Critic algorithm was used.

For a Deep Reinforcement Learning algorithm I chose Soft Actor-Critic (SAC) (specifically the tf-agents implementation). I picked this algorithm since it promises to be sample-efficient (therefore decreasing data collection time, an important feature when running on a real robot and not a simulation) and there were already some successful applications on simulated cars and real robots.
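For readers who want a starting point, a generic tf-agents SAC setup of the kind mentioned looks roughly like the sketch below. This is clearly not @pyetras’s actual code: the Pendulum environment stands in for the car environment, and the layer sizes and learning rates are common defaults rather than the project’s values.

```python
# Generic tf-agents Soft Actor-Critic (SAC) setup, shown only as a sketch.
import tensorflow as tf
from tf_agents.agents.ddpg import critic_network
from tf_agents.agents.sac import sac_agent, tanh_normal_projection_network
from tf_agents.environments import suite_gym, tf_py_environment
from tf_agents.networks import actor_distribution_network

# Stand-in continuous-control environment (use "Pendulum-v0" on older gym).
env = tf_py_environment.TFPyEnvironment(suite_gym.load("Pendulum-v1"))
observation_spec = env.observation_spec()
action_spec = env.action_spec()  # for the car this would be steering/throttle

critic_net = critic_network.CriticNetwork(
    (observation_spec, action_spec), joint_fc_layer_params=(256, 256))

actor_net = actor_distribution_network.ActorDistributionNetwork(
    observation_spec, action_spec, fc_layer_params=(256, 256),
    continuous_projection_net=tanh_normal_projection_network.TanhNormalProjectionNetwork)

agent = sac_agent.SacAgent(
    env.time_step_spec(), action_spec,
    actor_network=actor_net, critic_network=critic_net,
    actor_optimizer=tf.keras.optimizers.Adam(3e-4),
    critic_optimizer=tf.keras.optimizers.Adam(3e-4),
    alpha_optimizer=tf.keras.optimizers.Adam(3e-4),
    gamma=0.99, target_update_tau=0.005)
agent.initialize()
```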

The model followed the methodology of several projects, including “Learning to Drive Smoothly” and “Learning to Drive in a Day.” If you would like to learn more about this project, check out @pyetras’s YouTube video or GitHub.

From the ‘DeepRC Robot Car’ project on hackaday.io by Piotr Sokólski

Optical Machine Learning with Diffractive Deep Neural Networks #MachineLearning #3Dprinting #DeepLearning #NeuralNetworks #TensorFlow @InnovateUCLA

From techxplore.com. Credit: UCLA Engineering Institute for Technology Advancement

 

The Ozcan Lab at UCLA has created optical neural networks using 3D printing and lithography. TensorFlow models were trained on the MNIST, Fashion-MNIST, and CIFAR-10 data sets using beefy GPUs, and the trained models were then translated into multiple diffractive layers that together form the optical neural network. What the model lacks in adaptability it gains in speed, as it can make predictions “at the speed of light” without any power. The basic workflow involves shining light through an input object; the light then propagates through the optical neural network to a detector that captures the result.

…each network is physically fabricated, using for example 3-D printing or lithography, to engineer the trained network model into matter. This 3-D structure of engineered matter is composed of transmissive and/or reflective surfaces that altogether perform machine learning tasks through light-matter interaction and optical diffraction, at the speed of light, and without the need for any power, except for the light that illuminates the input object. This is especially significant for recognizing target objects much faster and with significantly less power compared to standard computer based machine learning systems, and might provide major advantages for autonomous vehicles and various defense related applications, among others.
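The conventional first step of that workflow, training a TensorFlow classifier on MNIST before its learned weights are translated into physical diffractive layers, can be sketched as follows. Translating the trained model into diffractive surfaces is the lab’s specialized fabrication step and is not shown here.

```python
# Train a simple TensorFlow classifier on MNIST; this is only the standard
# digital training step that precedes the diffractive-layer translation.
import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=5, validation_data=(x_test, y_test))
```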

If you’d like to learn more about photonics, check out the research happening at the Ozcan Lab. If you’d like more details about diffractive deep neural networks, check out this publication in Science or the most recent Ozcan Lab publication on the topic.

Alchemite Machine Learning Engine Used to Design New Alloy for Direct Laser Deposition 3D Printing

Artificial intelligence (AI) company Intellegens, which is a spin-off from the University of Cambridge, created a unique toolset that can train deep neural networks from noisy or sparse data. The machine learning algorithm, called Alchemite, was created at the university’s Cavendish Laboratory, and is now making it faster, easier, and less expensive to design new materials for 3D printing projects. The Alchemite engine is the company’s first commercial product, and was recently used by a research collaboration to design a new nickel-based alloy for direct laser deposition.

Researchers at the university’s Stone Group, along with several commercial partners, saved about $10 million and 15 years in research and development by using the Alchemite engine to analyze information about existing materials and find a new combustor alloy that could be used to 3D print jet engine components that satisfy the aerospace industry’s exacting performance targets.

“Worldwide there are millions of materials available commercially that are characterised by hundreds of different properties. Using traditional techniques to explore the information we know about these materials, to come up with new substances, substrates and systems, is a painstaking process that can take months if not years,” Gareth Conduit, the Chief Technology Officer at Intellegens, explained. “Learning the underlying correlations in existing materials data, to estimate missing properties, the Alchemite engine can quickly, efficiently and accurately propose new materials with target properties – speeding up the development process. The potential for this technology in the field of direct laser deposition and across the wider materials sector is huge – particularly in fields such as 3D printing, where new materials are needed to work with completely different production processes.”

Alchemite engine

Alchemite is based on deep learning algorithms which are able to see correlations between all available parameters in corrupt, fragmented, noisy, and unstructured datasets. The engine then unravels these data problems and creates accurate models that are able to find errors, optimize target properties, and predict missing values. Alchemite has been used in many applications, including drug discovery, patient analytics, predictive maintenance, and advanced materials.
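Alchemite itself is proprietary, so the following is only a stand-in illustration of the general idea: learning correlations across a sparse, noisy property table in order to estimate its missing values. It uses scikit-learn’s IterativeImputer on made-up alloy data, not the Alchemite algorithm.

```python
# Stand-in illustration of sparse-data property imputation (not Alchemite).
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

# Rows = candidate alloys, columns = properties (e.g. density, creep
# resistance, oxidation, ...); NaN marks unmeasured entries. Values invented.
properties = np.array([
    [8.2,    np.nan, 1.10,   np.nan],
    [np.nan, 950.0,  np.nan, 0.87],
    [8.4,    905.0,  1.05,   np.nan],
    [8.1,    np.nan, np.nan, 0.91],
])

# Iteratively model each property from the others to fill in the gaps.
imputer = IterativeImputer(max_iter=25, random_state=0)
completed = imputer.fit_transform(properties)
print(completed.round(2))
```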

Thin films of oxides deposited with atomic layer precision using pulsed laser deposition. [Image: Adam A. Læssøe]

Conduit is also a Royal Society University Research Fellow at the University of Cambridge.

Direct laser deposition – a form of directed energy deposition (DED) – is used in many industries to repair and manufacture bespoke and high-value parts, such as turbine blades, oil drilling tools, and aerospace engine components like the ones the Stone Group is working on. As with most 3D printing methods, direct laser deposition can help component manufacturers save a lot of time and money, but next-generation materials that can accommodate high stress gradients and temperatures are needed to bring the process to its full potential.

When it comes to developing new materials with more traditional methods of research, a lot of expensive and time-consuming trial and error can occur, and the process becomes even more difficult when it comes to designing new alloys for direct laser deposition. As of right now, this AM method has only been applied to about ten nickel-alloy compositions, which really limits how much data is available to use for future research. But Intellegens’ Alchemite engine helped the team get around this, and complete the material selection process more quickly.

(a) Secondary electron micrograph image for AlloyDLD. (b) Representative geometry of a sample combustor manufactured by direct laser deposition. [Image: Intellegens]

Because Alchemite can learn from data that’s only 0.05% complete, the researchers were able to confirm potential new alloy properties and predict with higher accuracy how they would function in the real world. Once they used the engine to find the best alloy, the team completed a series of experiments to confirm its physical properties, such as fatigue life, density, phase stability, creep resistance, oxidation, and resistance to thermal stresses. The results of these experiments showed that the new nickel-based alloy was much better suited for direct laser deposition 3D printing, and making jet engine components, than other commercially available alloys.


Deep Learning Used to Predict Stress in SLA 3D Printed Structures

In a thesis entitled “Deep Learning Based Stress Prediction for Bottom-Up Stereo-lithography (SLA) 3D Printing Process,” a University at Buffalo student named Aditya Pramod Khadlikar describes a method of predicting stress distribution in SLA 3D printed parts using a Deep Learning framework. The framework consists of a new 3D model database that captures a variety of geometric features that can be found in real 3D parts as well as “FE simulation on the 3D models present in the database that is used to create inputs and corresponding labels (outputs) to train the DL network.”

As Khadlikar points out, part deformation and failure during the separation process are common problems encountered in bottom-up SLA 3D printing.

“Cohesive Zone Models have been successfully used to model the separation process in bottom-up SLA printing process,” Khadlikar says. “However, the Finite Element (FE) simulation of the separation process is prohibitively computationally expensive and thus cannot be used for online monitoring of the SLA printing process.”

Therefore, Khadlikar created an alternate method of predicting stress. A convolutional neural network (CNN) was used to develop a deep learning framework that could calculate the stress induced in any layer of a CAD model in real time to assist in online monitoring of the bottom-up SLA 3D printing process. To train the network, a dataset was created using the Autodesk Inventor API, and an ABAQUS Python script was used to carry out FE simulations on the generated dataset.

Experiments were carried out on multiple samples using the CNN. Several parts with similar cross-sections at a particular layer were examined to see the stress distribution on that layer for a given part. Khadlikar and colleagues discovered that different parts with the same cross-section at a particular layer had different stress distribution at that layer.

“This shows that for non-uniform 3D parts, along with given layer information we need information from the previous layers as well,” Khadlikar says. “This motivated us to develop a new architecture where the stress information of the previous layer is also used for stress prediction for a given layer.”
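A minimal Keras sketch of that idea, not the thesis’s exact architecture, would take the current layer’s cross-section and the previous layer’s stress field as separate image inputs and regress the current layer’s stress map. The resolution and filter counts below are placeholders.

```python
# Sketch: two-input CNN that maps (current cross-section, previous-layer
# stress) to the current layer's stress distribution. Sizes are placeholders.
import tensorflow as tf
from tensorflow.keras import layers

H = W = 128  # resolution of the rasterized layer

cross_section = layers.Input(shape=(H, W, 1), name="current_cross_section")
prev_stress = layers.Input(shape=(H, W, 1), name="previous_layer_stress")

x = layers.Concatenate()([cross_section, prev_stress])
x = layers.Conv2D(32, 3, padding="same", activation="relu")(x)
x = layers.Conv2D(64, 3, padding="same", activation="relu")(x)
x = layers.Conv2D(64, 3, padding="same", activation="relu")(x)
stress_map = layers.Conv2D(1, 1, padding="same", name="predicted_stress")(x)

model = tf.keras.Model([cross_section, prev_stress], stress_map)
model.compile(optimizer="adam", loss="mse")  # trained against FE stress fields
```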

Stress distribution on the cured layer

An important conclusion reached was that CNN is drastically faster than FEA simulation. The created dataset worked effectively, helping to determine parameters such as peak stress and dependence on previous layer information to determine the stress distribution on a layer. The deep learning model, overall, outperformed the simple neural network model previously used for stress prediction.

“This framework can also be further used for training a larger dataset of 3D parts with varying heights as well,” Khadlikar says. “This framework cannot be used to predict stress on all the layers in a 3D part. This is due to the fact that previous layer stress information [is needed] to predict current layer stress. Using a prediction of the previous layer to predict current layer stress induces more error due to compounding. Future work will be stress prediction on each layer of the 3D part…A good direction for future research can be incorporating more parameters like the height of slice and pull-up velocity to mimic the 3D printing process more realistically and to get better control over the process.”
