Fresh approaches to adhesive technology R&D

Afera offered “new perspectives on tapes” in the 2nd Session of TechSem 2021

Afera’s 22 April online Session, part of our 4-part 9th TechSem, was moderated by Afera Technical Committee Chairman Reinhard Storbeck, who is also director of R&D at tesa, and included 2 presentations followed by a Q&A section. During Afera’s 90-minute webinar, Professor Steven Abbott discussed smart mapping of PSA space—including eliminating some traditional tests—for formulators. Then Kalli Angel described developing next-generation adhesives through combining the right data infrastructure with machine learning. Below, each presenter has contributed an article on their subject:

Smart mapping of PSA space: minimum measurements for maximum understanding

by Prof. Steven Abbott, Director of Steven Abbott TCNF, Ltd.

A PSA formulator has 2 key scientific issues. The first is that many in the community do not know the core science behind PSAs or what it means in terms of their formulations. The second is that even those who have all the right science cannot directly go from science to product.

The talk therefore was divided into 2 sections. The core science section relies heavily on the free apps on my Practical Adhesion website. These will help you better understand the ideas. The talk starts with the idea that surface energy is 4 orders of magnitude too small to be relevant, so naïve ideas of surface energy have to be abandoned. Then we can start to think of what really happens at the surface – especially the first 10 nm, where a relaxed PSA polymer is likely to provide stronger adhesion than one with lots of built-in stress.

There is then a chain of logic via small-strain G’/G’’ measurements. The Dahlquist criterion depends on the surface roughness – but roughness is more than Ra and Rz; as the Surface Profile app explains, you need amplitude and wavelength values to apply Dahlquist. Gathering G’/G’’ data over a modest range of time and temperature allows you to construct a full “ideal PSA” master curve spanning 10 orders of magnitude via WLF time–temperature superposition. The Ideal PSA app makes it possible to explore the effects of, say, plasticisers and tackifiers on key properties. Again, using G’/G’’ values, you can use the Chang Window app to place your current formulation in PSA space, acknowledging that if you are outside the appropriate window you will fail, but being inside does not guarantee success. That is because tensile properties are vital. The PSA world is not (yet) in the habit of measuring these – which is a big mistake. You cannot formulate without being able to compare the tensile properties.
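
As an illustration of the WLF step, here is a minimal sketch (not Prof. Abbott’s app) of how G’/G’’ data measured at a few temperatures are shifted onto one master curve. The “universal” WLF constants and the reference temperature below are assumptions for the example; real formulations need fitted values:

```python
import numpy as np

# WLF time-temperature superposition: shift G'/G'' data measured at
# temperature T onto a master curve at a reference temperature T_ref.
# C1 and C2 are the "universal" WLF constants; real formulations need
# fitted values (these numbers are illustrative assumptions).
C1, C2 = 17.44, 51.6  # C2 in K

def log_aT(T, T_ref):
    """log10 of the WLF shift factor a_T."""
    return -C1 * (T - T_ref) / (C2 + (T - T_ref))

T_ref = 25.0                       # degC, assumed reference temperature
omega = np.logspace(-1, 2, 50)     # measured frequency range, rad/s
for T in (0.0, 25.0, 50.0, 75.0):
    a_T = 10.0 ** log_aT(T, T_ref)
    reduced = omega * a_T          # frequencies shifted onto the master curve
    print(f"T={T:5.1f} degC  log10(a_T)={log_aT(T, T_ref):+6.2f}  "
          f"reduced: {reduced.min():.1e}-{reduced.max():.1e} rad/s")
```

A measurement window of only 3 decades in frequency, swept over a modest temperature range, is thus stretched across many decades of reduced frequency on the master curve.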

The Peel Test often produces meaningless data, because the physics behind the measurements is misunderstood. The Peel app shows how/why the modulus and thickness of the backing material, as well as the thickness of the adhesive, affect peel via influencing events ahead of the peel front.
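
One ingredient of that physics is the classic elastic energy balance for peeling a backed film (Kendall’s relation). The sketch below illustrates how backing modulus and thickness enter the peel force; it is not the Peel app’s actual model, and all the numbers are invented:

```python
import numpy as np

# Kendall's energy balance for peeling an elastic backing at angle theta:
#   (F/b)*(1 - cos(theta)) + (F/b)^2 / (2*E*h) = G
# F/b is the peel force per unit width, E and h the backing's tensile
# modulus and thickness, and G the energy dissipated per unit area of
# adhesive. Solving the quadratic for F/b shows how the backing enters.
def peel_force_per_width(G, E, h, theta_deg):
    c = 1.0 - np.cos(np.radians(theta_deg))
    a = 1.0 / (2.0 * E * h)           # coefficient of the quadratic term
    return (-c + np.sqrt(c * c + 4.0 * a * G)) / (2.0 * a)

G = 400.0  # J/m^2, illustrative PSA dissipation energy
for E, h, theta in [(2.5e9, 25e-6, 90), (2.5e9, 50e-6, 90),
                    (0.5e9, 25e-6, 90), (2.5e9, 25e-6, 30)]:
    print(f"E={E:.1e} Pa, h={h*1e6:.0f} um, {theta:2d} deg -> "
          f"F/b = {peel_force_per_width(G, E, h, theta):.0f} N/m")
```

At 90° the elastic term barely changes F/b; at shallower angles the backing’s stretching dominates, which is one reason peel results are so sensitive to the backing.
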
The standard Shear Test is meaningless, because in practice the Test often fails in peel. This has been known for a decade, and yet the Test continues to be used as a measurement of “shear”. Because it is important to know the real shear properties of the PSA, you can use the Creep or the Squeeze Test, whichever is best for you. All 3 are described in an app.

The Probe Tack Test famously does not relate to our core needs of peel and shear. Until recently it was impossible to interpret the results in terms of fundamental values. The recent “simple” probe tack model from ESPCI is described in another app. Recent work from ESPCI shows how probe and peel might be related, although currently it requires a specialist Peel Test and some complex science. This will be described in the next session.

This brings us to the second part of the talk. By gathering data that relates to the science behind PSA performance, it is possible to start mapping the data against good and bad PSA performance. That mapping can be done via the formulation team and good old-fashioned human brain power. Or it can be probed via high-quality data analysis techniques—which is the subject of the following talk by Kalli Angel of Uncountable.

For those who want a fuller explanation via a “popular science” approach (so it is an easy read), my book from the Royal Society of Chemistry, Sticking Together: The Science of Adhesion, explains how PSAs work and has its own YouTube channel to bring the science to life, including a video of PSAs in action in which my granddaughter takes me hostage with duct tape.

About Prof. Abbott
Prof. Steven Abbott was for many years a research director in the high-performance coatings industry. For the past decade he has been an independent consultant, author, lecturer and app writer trying his best to make usable science available to the formulation industry. His 2015 book Adhesion Science: Principles and Practice includes links to a set of adhesion apps to bring the science alive. His popular science book Sticking Together: The Science of Adhesion was published by the Royal Society of Chemistry in 2020 and features its own YouTube channel as a way to make the ideas easier to grasp.

Combining the right data infrastructure with machine learning to develop next generation adhesives

by Kalli Angel, director of account management in Europe, Uncountable, Inc.

A strong focus on sustainability and tight cost competition push adhesives R&D teams to accomplish more with the same resources. Adding to the challenge, many tenured scientists plan to retire in the coming years, taking with them instincts honed over decades of experience and leaving new graduates in their place to start the learning journey from the beginning.

The obvious solution is a shift toward data-driven innovation, but it is rarely easy to gather, connect, and analyse experiment data, especially when many R&D teams are still tracking their work in custom Excel sheets saved on personal hard drives across an organisation. Investments in organisational changes, paired with software designed for complex R&D environments, will help even the largest companies overcome these challenges.

For the last 3 years, Uncountable has partnered with materials companies to drive faster innovation, leveraging machine-learning algorithms optimised for these high-dimensional yet low-data environments. Along the way, we have developed best practices for determining the right long-term data infrastructure, managing change toward a consistent data culture in R&D, teaching scientists to collaborate effectively with new AI tools, and setting up a global organisation to take full advantage of machine learning in the innovation process now and in the future. Read more in Ms. Angel’s article Connecting R&D data: how to unlock the potential of laboratory resources here. Learn more about Uncountable’s technology here and resources here.

About Ms. Angel
Kalli Angel is director of account management in Europe for Uncountable, whose AI-powered web platform accelerates R&D at leading material companies around the globe. Previously she led GLG's corporate markets team in the EMEA region, working on key challenges for strategy and innovation teams, with a focus on the chemical and pharmaceutical sectors. She has designed successful technology implementation and change management initiatives, both internally and for clients. Ms. Angel is a graduate of Yale University, where she earned a B.A. in comparative literature.

Q&A section

Smart mapping of PSA space: minimum measurements for maximum understanding
Do you have experience in this kind of testing with water-based acrylic PSAs? The tests that Prof. Abbott ran when trying to find the difference between 2 PSAs which had the same G’/G’’ but very different tensile properties happened to be water-based. So, yes, he does have experience, but it does not change his overall message.

Could you explain why the Shear Test is bad science and the Creep and Squeeze Shear Tests are better? In classic adhesion, there is a well-known test called the Lap Shear Test, in which you pull an adhesive in nominally pure shear and measure the shear failure. It is well known, however, that the Lap Shear Test actually fails in peel, so it is really a Peel Test, and the same is true of the standard Shear Test. This was shown in the French experiments Prof. Abbott discussed. So if you want to understand shear behaviour, you have to do a test which genuinely is a shear test, such as the Creep or Squeeze Tests mentioned in the talk.

Concerning water-based PSAs, have you observed evidence of low-molecular-weight fragments or surface-active ingredients migrating to the interface layer and impacting polymer relaxation behaviour and peel propagation? What are the effects of this? Prof. Abbott said that in the whole of adhesion (and not just PSAs), everyone says things migrate to the interface with very little evidence, and indeed there is often no reason why a molecule should go to the interface, because the interface is often not very different from the bulk phase. Migrating to an open surface is a very different thing from migrating to an adhesive interface, and it is difficult to know how much is migrating. But he strongly believes that the more the industry finds ways of looking at what is happening in those few (maybe 10) nanometres, the more a large chunk of adhesive behaviour will advance. Because the industry has been based so much on surface energy, these kinds of questions have not been asked, but they should be.

Which kinds of tests would you recommend for the calibration of material models for finite element analysis? What is your recommendation if you are doing such calculations? Prof. Abbott said this requires smarter and deeper analysis than the talk was aiming at, but the audience was beginning to see the trade-offs of shear and peel and the impacts of other factors, such as the thickness of the material. If this kind of analysis is a natural part of your company’s DNA, you will gain a deeper understanding than most.

In terms of the preparation of samples, should testing be done on dried adhesives? Yes, Prof. Abbott said, although the word “dried” is dangerous with water-based adhesives, because of course there is drying and film formation. If you dry the adhesive, surfactants come to the surface before you have performed the test; if that is not how you are making your adhesive, then you are dealing with a very different beast. So again, you have to think of the science of what is going on: water-based saves the planet, but it does come with lots of additional burdens that complicate things. The general answer is that you have to be smart about what you are measuring and when, and not be too perturbed by stuff migrating to the surface in air if that is not part of your manufacturing process.

What is the barrier when it comes to surface tension? Many companies grapple with this topic. Prof. Abbott related that everyone has been taught that adhesion starts with surface energy. If you read any book except his, he said, they all stress the importance of surface energy to adhesion, but this is nonsense. A typical surface energy of anything is 40 mN/m. An ordinary PSA delivers a peel energy of around 400 N/m. That is 4 orders of magnitude difference, so something 4 orders of magnitude smaller is not of any great significance compared with all the other factors. This argument applies to all the adhesives Prof. Abbott knows of—not just PSAs. Surface energy arguments have never gotten anyone anywhere. Long ago, just like everyone else, when there was an adhesion failure, he would rush into the lab to measure surface energies. When he began writing his first book, Prof. Abbott realised that not once in his entire career had measuring a surface energy helped him solve any adhesion problem. Gradually people are beginning to realise that they have been taught nonsense for 40 years. Adhesion is all about entanglement if you have real adhesion, and about energy dissipation when it comes to PSAs.
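
As a quick check of that arithmetic (a surface energy of 1 mN/m is equivalent to 1 mJ/m², so both quantities can be compared as energies per unit area):

```latex
% 40 mN/m = 0.04 J/m^2, compared with a PSA peel energy of ~400 J/m^2
\frac{G_{\mathrm{PSA}}}{\gamma} \approx \frac{400\ \mathrm{J/m^2}}{0.04\ \mathrm{J/m^2}} = 10^{4}
```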

Combining the right data infrastructure with machine learning to develop next generation adhesives
Ms. Angel used predicted peel adhesion versus actual peel adhesion as an example of one of many targeted outputs from a trained model. The question was raised, however, whether failure mode might be of influence. Would it also be possible to create a model that would only use good failure mode data points and exclude poor failure mode data points? When you are feeding a training set of experimental data into a machine learning model, Ms. Angel said, the failures (in this case “poor failure mode”) are just as important as the successes, because what machine learning can do well is say “this is a bad result”. Understanding why it may be a bad result is just as important as coming to “this is a good result that we want to replicate.”

In terms of the training set, which is the data you feed into a machine learning model, you ideally have both good and bad results that are measured in similar ways. But when we are talking about cohesive failure—and we do work with labs that measure cohesive failure percent-wise, targeting a numeric value to ensure that it is at least 90% cohesive failure, for example—our machine learning models can also work with categorical markers for cohesive failure. In this case, you would have multiple targets (e.g. “Cohesive Failure”, “Adhesive Failure” and “Mixed Failure”) and are perhaps even looking at adhesion on multiple substrates for your customer. For each of those substrates, you have relevant acceptable failure modes, as well as the peel adhesion result, that you are targeting, and in this way you would set the goals that you are trying to achieve for future projects or formulations. The model would consider all of these things in balance to try to give you new suggested formulations that would provide you with both the adhesion you seek for your desired substrates and the failure mode you are also looking for.
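
A minimal sketch of the general idea, not Uncountable’s platform: fit one model to the numeric peel result and one to the categorical failure mode, then screen candidates on both. All ingredient fractions, numbers and thresholds here are invented for illustration:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier, GaussianProcessRegressor

# Toy training set: each row is a formulation (fractions of 3 ingredients),
# with a numeric peel result and a categorical failure mode (all invented).
X = np.array([[0.60, 0.30, 0.10], [0.50, 0.40, 0.10], [0.70, 0.20, 0.10],
              [0.40, 0.40, 0.20], [0.55, 0.35, 0.10], [0.45, 0.30, 0.25]])
peel = np.array([6.2, 7.1, 5.0, 4.1, 6.8, 3.5])              # N/25 mm
mode = np.array(["cohesive", "cohesive", "adhesive",
                 "mixed", "cohesive", "adhesive"])            # failure mode

reg = GaussianProcessRegressor(normalize_y=True).fit(X, peel)
clf = GaussianProcessClassifier().fit(X, mode)

# Screen random candidate formulations: keep those predicted to give high
# peel AND a high probability of the acceptable ("cohesive") failure mode.
cand = np.random.default_rng(0).dirichlet(np.ones(3), size=200)
pred_peel = reg.predict(cand)
p_cohesive = clf.predict_proba(cand)[:, list(clf.classes_).index("cohesive")]
keep = (pred_peel > 6.5) & (p_cohesive > 0.8)
print(f"{keep.sum()} of 200 candidates pass both screens")
```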

Regarding large data sets already existing in-house, do you suggest harvesting this historic data, or should a company start from scratch and regenerate the data? There is no easy answer to this, Ms. Angel related. I might recommend a bit of both, depending on what the historical data looks like. Answering this will always require a cost-benefit analysis of the time it would take a company or a third-party contractor to structure the data in a way that makes it useful. Some companies we work with have relatively structured databases of past results. With some data cleaning and analysis, these can easily be put into a new platform so they can be used and searched through. Sometimes the data is literally sitting in lab notebooks, and we have found that data does have a half-life: 6 months from now, the data from 3 months ago is going to be more valuable than the data from 3 years ago. This may be because materials or lab equipment have changed, even slightly, or there has been turnover in scientists. Even though the tests are standardised, the way that each individual performs them may differ.

So old data tends to have more noise. A good balance Ms. Angel has seen in some labs is taking data from a few years ago in Excel spreadsheets—only if they are working on a particular project in which it may be relevant—and structuring and introducing it into the platform to learn from alongside newer data. When she is working with a new customer, one of the first things she does is look at where they have existing structured data and use that as a foundation on which to build a data ecosystem.
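
One simple way to encode that half-life when learning from historical data alongside new data is to down-weight records by age. The sketch below is an illustrative assumption, not Uncountable’s method; the half-life value and the model choice are made up:

```python
import numpy as np
from sklearn.linear_model import Ridge

# Down-weight historical records by age with an assumed half-life, so
# recent experiments dominate the fit without discarding old data.
age_days = np.array([10.0, 90.0, 400.0, 1100.0])   # age of each record
half_life = 365.0                                   # assumed, in days
weights = 0.5 ** (age_days / half_life)             # ~1.0 for fresh data

X = np.array([[0.6, 0.4], [0.5, 0.5], [0.7, 0.3], [0.4, 0.6]])  # formulations
y = np.array([6.2, 7.1, 5.0, 4.4])                  # measured results
model = Ridge(alpha=1.0).fit(X, y, sample_weight=weights)
print(model.predict([[0.55, 0.45]]))
```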

How do you handle multiple optimisation targets, e.g. costs and stiffness, which do not fit in one cost function? Ms. Angel used one of their demo environments for a rubber ecosystem to illustrate what the results of one of their design exercises might be. The exact reason to use Gaussian process models and Bayesian optimisation is to balance multiple, often competing priorities. She showed how to share targets with the model, such as tensile strength, viscosity and rheology, and then how to also program in cost, which could be pulled from the underlying ingredient attributes in the platform. The model then balances these by creating an objective function from all the different priorities and the weights that you have assigned to them, along with how much it can learn about what actually drives these different results from the training set you are using. All of this can then be used to create potential experiments to be conducted next.
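
A minimal sketch of the scalarisation idea behind such an objective function (one Gaussian process per target, folded into a weighted score with an exploration bonus). Uncountable’s actual models are not public; the targets, weights and data below are invented:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(1)
X = rng.random((30, 4))                               # 30 past formulations
y_tensile = X @ np.array([4.0, 1.0, 0.0, 2.0]) + 0.1 * rng.standard_normal(30)
y_cost = X @ np.array([1.0, 3.0, 5.0, 0.5])           # cost from ingredient prices

gp_tensile = GaussianProcessRegressor(normalize_y=True).fit(X, y_tensile)
gp_cost = GaussianProcessRegressor(normalize_y=True).fit(X, y_cost)

def objective(x, w_tensile=1.0, w_cost=0.5, kappa=1.0):
    """Weighted sum of predicted targets, plus a GP-uncertainty bonus
    (an upper-confidence-bound style acquisition function)."""
    mu_t, sd_t = gp_tensile.predict(x, return_std=True)
    mu_c = gp_cost.predict(x)
    return w_tensile * mu_t - w_cost * mu_c + kappa * sd_t

# Score a batch of random candidates; the top scorers become the
# suggested experiments to run next.
cand = rng.random((500, 4))
best = cand[np.argsort(objective(cand))[-5:]]
print(best)
```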

Sponsors

Afera is grateful to all of the sponsors of the 9th Technical Seminar.

Slides and recording

If you were a registered participant in the TechSem and would like to download the recording and visuals for this Session, please contact the Secretariat.

