1 Introduction

With the rapid development of technology and art, Bio-Inspired Design (BID), a design method driven by biological knowledge, has been widely used in fields including mechanical design, product design, architectural design, and medicine [1, 2]. BID is an important innovative design method and mindset in industrial design, also known in research as bionics or biomimicry [3], and it has been shown that taking inspiration from nature can lead to genuine innovation in product design [4]. Product bionic design has become an important way for designers to give products emotion: using creatures in nature as bionic objects and combining design rules to incorporate biological features into the product shape, so that users form a corresponding psychological mapping, which can also stimulate their imagination and create emotional resonance [5].

More and more designers now take inspiration from nature, extracting, classifying, and reconstructing biological features according to design tasks and needs, to create biomimetic products with a biological flavor [6]. Bionic design has been significant for car styling, for example: it contributed to the birth of aerodynamics and the streamlining of cars, and has since been one of the car designer's essential tools, creating a heroic era of car design. As shown in Fig. 1, the Beetle launched by Volkswagen in Germany extracts the back line of the beetle and applies it to the car's design. The Jaguar was designed with a strong bionic visual effect, its center of gravity set back in the likeness of a jaguar. The designers at Mercedes-Benz modeled the Bionic concept car on the shape of a boxfish, achieving a drag coefficient of just 0.19.

Fig. 1 Different biologically inspired bionic products with different car styling

For the product designer, however, BID is usually the result of the designer's own experience and inspiration, and reflecting design intentions in an innovative product carries a certain ambiguity and inapplicability during the design process. Research has combined the theory of inventive problem solving (TRIZ), biological coupling, design-by-analogy (DbA), and genetic algorithms to obtain knowledge from nature, transfer it from biology to engineering, and generate innovative product design solutions, achieving a quantitative analysis of the product shape bionic design process. Currently, the central problem in BID is the difficulty of visualizing the fusion of abstract biological inspiration with figurative product shapes. This paper introduces the deep generative (DG) model into BID to solve this problem.

With the development of artificial intelligence (AI), DG models are being used in design automation to improve a design team's performance through co-creation with AI [7]. We combine BID with DG models to address the challenges of product shape bionic design and promote the rapid development of bionic products, starting from the morphology of creatures and products and using deep learning to complete the key step of bionic design: bionic fusion. In this article, we first introduce the concepts of DG and BID and the related research. Second, an image morphing technique based on StyleGAN is used to visualize and generate bionic solutions, and an exploratory deep generative bio-inspired design model (DGBID) at the intersection of the two disciplines is proposed. Then, using the contour lines of the bionic solution generated by DGBID as a reference, the designer participates in optimizing the solution in the form of sketches; style transfer transforms the hand-drawn sketch into a realistic design solution, so the bionic scheme is optimized through human–computer interaction. At the same time, DGBID alone, as a tool for automatically generating bionic solutions, does not constitute a complete bionic system, even though it inspires designers' creative thinking and avoids the simple reproduction of sources of inspiration. We therefore combine perceptual engineering and eye-movement testing to create a multidisciplinary product bionic framework, which supports a collaborative human–machine approach across the entire bionic design process and offers the potential to explore a wide range of design possibilities, foster creativity, and balance objective and subjective needs. Finally, taking the bionic design of a car side view as an example, a new bionic scheme is generated visually according to the framework, a co-creative bionic optimization scheme is produced, and the bionic evaluation is completed.

This article contributes as follows:

(1) A DGBID is proposed by combining BID with DG models to facilitate the rapid development of bionic products. Compared with existing solutions, the proposed method produces high-quality bionic fusion images.

(2) An image morphing technique based on StyleGAN gradually visualizes the potential mapping between bionic products and bionic objects, providing a new way of performing bionic fusion.

(3) In bionic shape optimization, the designer participates in the form of sketches, yielding a co-creative approach to the rapid generation of bionic product renderings.

2 Research Background

2.1 Bio-inspired Design

No stranger to the design community, BID as a design methodology has been shown to significantly improve the sustainability of design outcomes and to inspire ground-breaking ideas that can be further developed into new patents [8]. BID allows cross-disciplinary knowledge mapping to bridge the differences between disciplines. Julian et al. [9] developed the BioTRIZ approach by combining bionics with the basic principles of TRIZ to identify biological inspiration for contradictions solved through structural and information strategies. Edison et al. [10] proposed the concept of biological coupling, defining biological function as the organic combination of several interdependent factors produced by a specific coupling mechanism, to reveal the nature of biological function. Mak et al. [11] proposed a hierarchy of forms, behaviors, and principles to present biological phenomena as potential analogies, using biological analogies to motivate design ideas. Bian et al. [12] used the bidirectional encoder representations from transformers (BERT) pretraining model to calculate the semantic similarity between product description sentences and biological sentences, so that designers could choose the highest-ranked results to complete bionic reasoning. Li et al. [13] proposed a data acquisition method based on intuitionistic fuzzy sets that considers different customer preference distributions. In describing the application of bionics to industrial design, Birkeland et al. [14] first proposed introducing genetic algorithms into industrial design, pointing to a new direction for the digital bionic design of product shapes.

This literature review shows that more and more researchers are focusing on digital approaches to product shape bionic design. Although these approaches help designers generate innovative bionic solutions more easily and quickly, the integration of abstract biological inspiration with figurative product shapes remains difficult to visualize. To explore new ways of visualizing product shape bionic design, this research needs to be combined with theoretical knowledge from other disciplines.

2.2 Deep Generative Model

Generative design is a design exploration method in which design options are generated automatically or interactively by algorithms in a generative design system; although this may cover only part of the overall design, its key design elements are produced by algorithms [15, 16], and thousands of different design options can be generated automatically to help humans explore a design space far larger than they could cover themselves [17]. Incorporating objective engineering requirements into subjective conceptual design fosters designers' creativity and innovation. In recent years, the rapid development of deep learning, represented by neural networks, has led to the introduction of deep generative (DG) models, whose strong automatic learning capabilities give design teams a more intelligent way to obtain design solutions [7]. Deep neural network architectures such as generative adversarial networks (GAN) [18], variational auto-encoders (VAE) [19], and autoregressive models [20] are currently used in image generation. Among them, the GAN has achieved great success in areas such as data generation [21], texture synthesis [22], video generation [23], image-to-image translation [24], and style transfer [25, 26], making it the most effective image generation technique available.

A GAN contains two sub-networks: a generator and a discriminator. The generator aims to produce images that the discriminator cannot distinguish from real ones, while the discriminator aims to judge whether incoming images are real or fake. During training, the generator and discriminator each update their parameters to minimize their losses, eventually reaching a Nash equilibrium through continuous iterative optimization; when the generator produces images the discriminator can no longer distinguish, the model is optimal. The objective function of the GAN is defined as:

$$\min_{G}\;\max_{D} V(G,D) = E_{x\sim p_{\text{data}}(x)}[\log D(x)] + E_{z\sim p_{z}(z)}[\log (1 - D(G(z)))],$$
(1)

where x is a natural image drawn from the real data distribution \(p_{\text{data}}(x)\), and z is a latent random vector sampled from a uniform distribution.
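For concreteness, the following minimal PyTorch sketch shows how Eq. (1) is typically optimized in practice, alternating discriminator and generator updates. The network objects, the Gaussian prior, and the non-saturating generator loss are common practical choices assumed here, not details taken from this paper.

```python
# Minimal sketch of one GAN training step for Eq. (1); G and D are assumed
# torch.nn.Module networks whose D outputs raw logits.
import torch
import torch.nn.functional as F

def gan_step(G, D, opt_G, opt_D, real, z_dim=512):
    z = torch.randn(real.size(0), z_dim, device=real.device)  # latent prior p_z

    # Discriminator: maximize log D(x) + log(1 - D(G(z)))
    opt_D.zero_grad()
    d_real = D(real)
    d_fake = D(G(z).detach())
    loss_D = F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real)) \
           + F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake))
    loss_D.backward()
    opt_D.step()

    # Generator: Eq. (1) minimizes log(1 - D(G(z))); the common non-saturating
    # variant instead maximizes log D(G(z)), as done here.
    opt_G.zero_grad()
    d_fake = D(G(z))
    loss_G = F.binary_cross_entropy_with_logits(d_fake, torch.ones_like(d_fake))
    loss_G.backward()
    opt_G.step()
    return loss_D.item(), loss_G.item()
```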

Owing to the GAN's excellent generative power, more and more scholars apply it in DG to obtain design solutions more intelligently. Yang et al. [27] used DRAGAN to automate anime character creation. Ling et al. [28] proposed a GAN-based co-creative drawing system that allows humans and machines to collaborate in generating high-quality cartoons. Yong et al. [29] proposed a GAN-based method for generating design solutions to advance intelligent industrial design. Yuan et al. [30] developed DA-GAN, a GAN model that automatically generates images of fashion products with the desired visual attributes. Hong et al. [31] improved the original StyleGAN model to efficiently generate high-quality icons; combined with StyleGAN's visual quality, these properties give rise to unparalleled editing capabilities [32]. In summary, several designers have used GANs within deep generative models to conceptualize their ideas and make the abstract visible, providing both a theoretical basis and a source of inspiration for our exploration of bionic design as an abstract process.

2.3 Deep Generative Bio-inspired Design Model

Given the outstanding performance of the GAN family of DG models in image generation, we found that StyleGAN, a derivative network of the GAN, offers an image morphing technique that fits the requirement of a smooth transition from product image to biological image and can visually express the potential mapping between biological shape and product shape, solving an important problem in innovative bionic form design. This paper therefore proposes the concept of DGBID based on StyleGAN's image morphing technology. DGBID is a tool that uses deep learning to generate bionic solutions from the perspective of computer vision, exploring a reasonable and effective mapping strategy for integrating biological and product shapes, helping product designers visually generate new design solutions, and transforming the machine from a supporting role into a content creator. This allows designers to focus more on conceptualizing product ideas and promotes the rapid development of bionic products. After DGBID has completed the stylistic integration of the product with the creature, the automatically generated solution needs further optimization; this optimization process is intentionally made interactive rather than automated, since human participation is an indispensable element of creative design. At the same time, it seeks an inspirational collision between human and artificial intelligence, embedding creativity into bionic products in a co-creative way to improve the versatility, effectiveness, and accuracy of bionic product design.

3 Method Overview and Implementation

3.1 Method Overview

To prevent the bionic process from being divorced from functional, structural, and semantic constraints, we build a multidisciplinary product bionic framework based on StyleGAN's image morphing technique to implement DGBID, combined with perceptual engineering and eye-movement experiments. The overall framework, shown in Fig. 2, has four modules: bionic matching, biometric feature extraction, DGBID (bionic fusion), and bionic optimization and evaluation. Together they establish a system for the semantic-cognitive matching of bionic products to biological models and for scientific bionic integration.

Fig. 2 Methodology framework

(1) Bionic matching: starting from the imagery-cognitive coupling of bionic objects and bionic products, the bionic object with the highest perceptual-cognitive coupling to the bionic product is selected after quantitative analysis. (2) Biometric feature extraction: starting from the morphological-cognitive coupling of bionic objects and bionic products, eye-movement experiments are used to extract biological shape features. (3) DGBID: StyleGAN's image morphing is used to visually generate a bionic solution, completing the key step of product shape bionic design: bionic fusion. (4) Bionic optimization and evaluation: from the solutions generated by DGBID, the designer first selects the fusion images with bionic value and then optimizes the design in detail, outlining the solution's contour lines as a sketch reference and refining the product's features on the sketch as appropriate. The sketches are then given the style of commercially available product images via style transfer, filling in the inherent style the product should have. Finally, three indicators (aesthetics, similarity, and practicality) are used to evaluate the bionic solutions, and the best solution is selected by scoring on a five-point Likert scale.

3.2 Method Implementation

3.2.1 Bionic Matching

According to Gestalt psychology, the perception of a visual object remains constant while the object changes within a certain range, a phenomenon known as perceptual constancy. Perceptual constancy includes size, shape, brightness, and color constancy. The Gestalt principle of perfective convergence further indicates that the human brain homogenizes, regularizes, and simplifies meaningless graphics and matches them with meaningful ones in memory. If the product bionic design process selects a biological source domain similar in shape to the target-domain product, the user's perception of the shape remains consistent. The theoretical basis for bionic form matching is to improve recognizability within the variable range of product forms. At present, the matching of target and source domains mainly uses perceptual imagery matching. First, the product imagery vocabulary is mined through semantic differential (SD) questionnaires, and the semantic difference method [33] is used to quantify the imagery cognitive scales of bionic products and bionic creatures. The Pearson correlation coefficient is then used to characterize the imagery-perception distance between the bionic product and each creature. Finally, the creature with the smallest imagery-perception distance is used as the bionic object. The Pearson correlation coefficient is calculated as follows:

$$\rho (X,Y) = \frac{E[(X - \mu_{X})(Y - \mu_{Y})]}{\sigma_{X}\sigma_{Y}} = \frac{\sum\nolimits_{i = 1}^{n}{(X_{i} - \mu_{X})(Y_{i} - \mu_{Y})}}{\sqrt{\sum\nolimits_{i = 1}^{n}{(X_{i} - \mu_{X})^{2}}}\sqrt{\sum\nolimits_{i = 1}^{n}{(Y_{i} - \mu_{Y})^{2}}}}.$$
(2)

The Pearson correlation coefficient quantifies the similarity between two vectors and takes values in [−1, 1]; the higher the value, the stronger the positive correlation.
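As an illustration, the following Python snippet computes Eq. (2) over mean SD-questionnaire ratings; the seven-dimensional rating vectors below are made-up placeholders, not the study's data.

```python
# Sketch: imagery-perception coupling between a product and candidate creatures
# via the Pearson coefficient of Eq. (2). scipy.stats.pearsonr also returns the
# significance p-value used later in Section 4.1.
import numpy as np
from scipy import stats

car = np.array([5.1, 5.6, 4.2, 3.9, 4.4, 3.1, 5.0])        # hypothetical means
creatures = {
    "shark":  np.array([5.3, 5.8, 4.5, 3.6, 4.2, 3.0, 4.8]),
    "jaguar": np.array([5.6, 6.1, 3.9, 3.2, 3.8, 2.7, 4.5]),
}

for name, vec in creatures.items():
    r, p = stats.pearsonr(car, vec)   # r implements Eq. (2)
    print(f"{name}: r={r:.3f}, p={p:.3f}")
```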

3.2.2 Biometric Feature Extraction

Feature recognition is an important means of revealing the potential use of biological knowledge in engineering. Biometric features are extracted through eye-movement tests, which visualize the user's eye movements in real time during cognitive processing [34]. Visual perception is the main channel for decoding biological feature information, so research on users' visual perception can better guide design work, and eye-tracking technology is an important method for quantifying that perceptual information. In eye-tracking experiments, the subject's pupil and gaze trajectory are measured, and combined qualitative and quantitative analysis can probe the underlying mental activity and thoughts. Eye-movement experiments were used to explore users' gaze characteristics when observing sample images, and common eye-movement measures (gaze duration, fixation count) were analyzed together with the resulting eye-movement hotspot maps to characterize users' psychological responses during cognition [35]. This paper uses eye-movement experiments to identify areas of interest in creature models and presents an experiment for extracting the features of creature models to improve the efficiency of conveying biological imagery. It further helps designers quantify constraints on the bionic design process, better weighing the relationship between figurative expression and technical constraints.
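As a rough sketch of how such hotspot maps can be computed, the snippet below aggregates duration-weighted fixation points into a Gaussian-smoothed heatmap; the coordinates, durations, and smoothing radius are illustrative assumptions, not the experiment's actual parameters.

```python
# Sketch: build an eye-movement hotspot map from fixation data.
import numpy as np
from scipy.ndimage import gaussian_filter

H, W = 512, 512
fixations = [(260, 140, 320), (300, 180, 250), (150, 260, 90)]  # (x, y, ms)

heat = np.zeros((H, W))
for x, y, dur in fixations:
    heat[y, x] += dur                       # weight each fixation by its duration
heat = gaussian_filter(heat, sigma=25)      # spread points into a smooth hotspot map
heat /= heat.max()                          # normalize to [0, 1] for overlay display
```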

3.2.3 Bionic Fusion

At the heart of product bionic design is the fusion of biological form and product form, which is both the basis of product form bionic design and its supporting technology, influencing the quality and speed of innovative product design, selection, and combination. Traditional bionic design generally relies on the designer's understanding, manually extracting morphology, structure, and other features from the bionic object, which is somewhat ambiguous and unadaptable, so solution generation is inefficient. This paper builds a DGBID model based on the StyleGAN image morphing technique to solve this problem effectively and to provide a tool that visually generates bionic design solutions, obtaining the best design solution with the aid of artificial intelligence.

StyleGAN is an improved version of the GAN, also consisting of a generator G and a discriminator D. StyleGAN retains ProGAN's discriminator and improves on its generator: the redesigned generator architecture aims to reduce feature entanglement and improve control over style. The StyleGAN generator G consists of two parts, a mapping network and a synthesis network [36], as shown in Fig. 3.

Fig. 3 StyleGAN network structure model

The specific roles are as follows. First, to achieve feature disentanglement, the mapping network in the generator G consists of eight fully connected layers that encode the input vector as an intermediate vector, which is then transformed by a learnable affine transformation into a style control vector \(y = (y_{s}, y_{b})\). This vector controls the adaptive instance normalization (AdaIN) [37] operation after each convolutional layer in the synthesis network; the AdaIN operation is defined as:

$${\text{AdaIN}}(X_{i}, y) = y_{s,i}\frac{X_{i} - \mu (X_{i})}{\sigma (X_{i})} + y_{b,i}.$$
(3)
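A minimal implementation of Eq. (3) might look as follows; the tensor layout and function name are our own illustrative choices, not the paper's code.

```python
# Sketch of AdaIN per Eq. (3): normalize each feature map, then scale and shift
# it with the style codes (y_s, y_b). Tensors use (batch, channels, H, W).
import torch

def adain(x, y_s, y_b, eps=1e-8):
    mu = x.mean(dim=(2, 3), keepdim=True)        # per-channel spatial mean
    sigma = x.std(dim=(2, 3), keepdim=True)      # per-channel spatial std
    x_norm = (x - mu) / (sigma + eps)
    # y_s, y_b: (batch, channels) scale/bias from the learned affine transform
    return y_s[..., None, None] * x_norm + y_b[..., None, None]
```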

Second, to improve style control and training efficiency, the synthesis network of StyleGAN borrows ProGAN's progressive generative network structure. Basic parts of the image are created first by learning coarse features that also appear in low-resolution images, and finer detail is learned as the resolution increases over time. Low-resolution images are not only simple and fast to train but also support the higher training levels, improving overall efficiency, and when used properly they allow different visual features of an image to be controlled. Based on these principles, the generator's synthesis network consists of eight resolution layers. The intermediate vectors are passed to the synthesis network as 16 feature codes, which act on the 8 resolution layers (the generator grows from \(4^2\) to \(8^2\) and finally to \(512^2\)). Each resolution layer is controlled by two feature codes: features 1 and 2 control the feature pattern of the \(4^2\) resolution layer, and so on. According to the resolution they control, the feature codes are divided into three types: coarse, middle, and fine resolution layers. The correspondence between resolution layers, feature codes, and the image features they control is shown in Table 1. In this way, StyleGAN can control specific features of the generated images and improve training efficiency.

Table 1 Relationship between resolution layers and feature code, controlled image resolution, and image features

Because StyleGAN is trained unsupervised and generates images and style mixtures from two random latent codes, it cannot by itself modify the style between two given images. The StyleGAN-encoder offers a solution: an efficient embedding algorithm that selects a random initial latent code and optimizes it by gradient descent [39, 40], mapping a given image into the extended latent space W+ of a pre-trained StyleGAN [38]. Optimizing the latent vector in W+ space can reconstruct any image, whether or not its style appears in the training set, and the reconstructed embedded image has very high visual quality [41]. To measure the similarity between the input and embedded images, the StyleGAN-encoder uses a loss function combining VGG-16 perceptual loss with pixel-wise MSE and LPIPS losses (the smaller the loss, the more similar the embedded image is to the input). Here, the specified images are embedded into StyleGAN's latent space using the StyleGAN-encoder. Given two embedded images with latent vectors W1 and W2, their transformation can then be computed by linear interpolation [38]:

$$w = \lambda w_{1} + (1 - \lambda )w_{2}, \quad \lambda \in (0,1).$$
(4)

New images are then generated from the interpolated code w. This linear calculation lets the StyleGAN-encoder specify different levels of style blending between two images, and we use it to achieve a smooth transition from product models to bionic objects. Because this paper mainly uses the technique to morph overall form, so that the generated bionic solutions change at the macro level, the linear interpolation mainly alters the coarse and middle layers of the latent vectors W1 and W2 (feature codes 1–8).
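The following sketch shows this restricted interpolation under our assumptions about the W+ layout (16 layer-wise codes of dimension 512, with codes 1–8 as the coarse and middle layers); keeping the untouched fine codes from the product side is our assumption, as the paper only states that codes 1–8 are interpolated.

```python
# Sketch of Eq. (4) applied only to the coarse and middle layers of W+.
import torch

def bionic_blend(w_product, w_creature, lam):
    # w_*: (1, 16, 512) W+ codes obtained from the embedding step
    w = w_product.clone()                  # fine codes (9-16) stay with the product
    w[:, :8] = lam * w_product[:, :8] + (1 - lam) * w_creature[:, :8]
    return w

# Usage (w_car and w_shark are embedded codes; sweep lam over 1/16 ... 8/16
# to visualize the gradual bionic fusion):
# frames = [bionic_blend(w_car, w_shark, k / 16) for k in range(1, 9)]
```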

3.2.4 Bionic Optimization and Evaluation

Different product and biological images are transformed by linear interpolation to produce different bionic solutions. To identify the DGBID-generated solutions with bionic value, the designer first makes an initial selection and then refines the chosen solutions in the form of sketches. The sketch and a premium-style product image are then embedded into StyleGAN's latent space, and the style of the premium image is transferred onto the sketch (in Eq. (4), \(\lambda\) taken as 1/2 achieves the style transfer [38]). The solutions optimized by the different designers are organized into a cluster of bionic solutions and evaluated one by one. The evaluation of design has always been an important issue in design research: few engineering design problems are completely free of subjective demands, so human input and multiple design options are inevitable. Moreover, most real design problems involve several competing and conflicting goals; there is no single 'best' design, only trade-offs between needs, which again requires humans to evaluate multiple options.

In evaluating bionic design, Vandevenne et al. [42] selected four indicators of bionic design efficiency: quantity, variability, innovation, and quality. Keshwani et al. [43] compared the quality of bio-inspired and brainstormed concepts using two metrics, degree of innovation and degree of abstraction. In this paper, the Delphi method was used. Five experts were invited to anonymously submit opinions on the evaluation indexes for the bionic design proposal; after consolidating similar opinions, the first-round results were returned to the experts for debate and evaluation. Finally, three evaluation indexes were established: aesthetics, similarity (to the source-domain creature), and practicality, each scored on a five-point Likert scale.
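For illustration, aggregating such Likert ratings reduces to simple averaging per scheme and indicator; the array shape below mirrors the setup in Section 4.4 (three schemes, a rater panel, three indicators), with random placeholder scores instead of the real data.

```python
# Sketch: mean five-point Likert scores per scheme and indicator.
import numpy as np

rng = np.random.default_rng(0)
scores = rng.integers(1, 6, size=(3, 22, 3))   # schemes x raters x indicators
means = scores.mean(axis=1)                    # per-scheme indicator means
for scheme, m in zip("abc", means):
    print(f"scheme ({scheme}): aesthetics={m[0]:.2f}, "
          f"similarity={m[1]:.2f}, practicality={m[2]:.2f}")
```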

4 Experiment

4.1 Bionic Matching

To verify the feasibility of the method, this study uses car side-view models as the bionic product and existing bionic objects on the market (jaguar, beetle, shark, horse, boxfish, and bull) as the biological contour data set. After consulting users and designers on their impressions of the car and the six bionic objects, the 87 styling-related perceptual imagery words were categorized and compressed with the KJ method, excluding words with similar imagery and pairing words with opposite meanings. Seven pairs of common perceptual terms were obtained to represent users' perceptual evaluations: "individualistic–popular", "dynamic–quiet", "simple–elaborate", "rounded–hard", "stiff–soft", "heavy–light", "harmonious–unbalanced". The car and the six bionic samples were then randomly numbered and combined with the imagery vocabulary into a questionnaire, scored on a seven-point Likert scale to reflect each subject's interpretation of the samples' perceptual imagery. The questionnaire had two parts: the first asked subjects to rate the car and the six bionic creatures on the perceptual word scales; the second asked them to rank their liking for the six creatures. The subjects were college design students and professional designers: 16 women and 14 men, 30 in total, with an average age of 25. The results of the mean analysis are shown in Fig. 4.

Fig. 4 Average perceptual image of survey subjects

The averaged results were then used to study the correlation between the car and the jaguar, beetle, shark, horse, boxfish, and bull. The Pearson correlation coefficient, calculated with Eq. (2), indicates the strength of each correlation. The calculated coefficients, together with the mean liking values, are shown in Fig. 5.

Fig. 5 Correlation coefficient and popularity ranking of bionic objects and cars

Based on the results in Fig. 4, the imagery-perception coupling of the car with each of the six creatures is compared alongside the liking ranking of the creature samples. The creature with the highest coupling to the car's perceived imagery is the shark, with a correlation coefficient of 0.907 and p = 0.005 < 0.01, an extremely significant correlation; next is the jaguar, with a correlation coefficient of 0.808 and p = 0.028 < 0.05, a significant correlation. Meanwhile, the liking ranking of the biological samples shows the jaguar highest at 2.4, followed by the shark at 1.5. Considering the overall ranking of imagery-cognitive coupling together with the popularity rating, the shark ranked highest in cognitive coupling and also rated highly in popularity, so the shark was chosen as the bionic object for the car.

4.2 Biometric Feature Extraction

After perceptual engineering determined that the shark was the most compatible bionic creature for the car, a side view of the shark was selected, the extraneous background was removed, and the image was resized to 512×512, as shown in Fig. 6(a). To improve the effectiveness of the biological imagery message, the key biological forms are retained and irrelevant ones eliminated, using eye-tracking to analyze designers' gaze characteristics when observing the biological model samples. Before the experiment, only the features of the shark's shape were retained, to exclude extraneous variables such as color, and the shape was filled with white; the pre-processed image in Fig. 6(b) served as the eye-movement sample. Three product stylists were then invited to participate in the experiment, shown in Fig. 6(c). The eye-movement hotspot maps and gaze trajectory data from the three experiments were aggregated onto a single image, with the final results shown in Fig. 6(d). The data showed that the designers' perception of the shark shape focused mainly on the torso and the dorsal fin, while the tail and the fins on both sides of the torso received less attention. Therefore, in the biomimetic design of the product, the fins on both sides of the torso and the tail were eliminated and the main part of the torso retained; the final shark shape is shown in Fig. 6(e).

Fig. 6 Overall flow diagram of the eye-movement experiment. a initial shark image, b sample eye-movement experiment image, c experimental procedure, d eye-movement experiment result image, e shark model image after feature extraction

4.3 DGBID (Bionic Fusion)

Once the key styling features of the shark had been extracted, a StyleGAN model was trained on car images. To verify the validity of the method, the existing LSUN CAR dataset [44] was selected; it contains 893k car images, and the final trained StyleGAN model reached FID = 3.27 and perceptual path length (PPL) = 1485. We then invited the three product designers who had participated in the eye-tracking experiment to each select one car side view from the market, covering a popular car, a sports car, and an SUV. With the three car side views and the shark side view as inputs, the experimental flows of the three designers are shown in Fig. 7(a), (b), and (c), respectively, with the input images placed at the left and right ends.

Fig. 7 Experimental map of StyleGAN-based bionic fusion. a, b, c the bionic fusion experimental processes of the three participating designers

All input images were embedded into the latent space W+, with the embedding loss values shown in Fig. 8. The three car images converge at 2000 iterations, while the shark image converges gradually at 3000 iterations, so the embedded images were generated with 2000 iteration steps for the car images and 3000 for the shark image. The embedded images for each set of experiments, and the linearly interpolated images of the bionic scheme for \(\lambda\) from 1/16 to 8/16, are shown in Fig. 7.
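A hedged sketch of this embedding step, in the spirit of the StyleGAN-encoder described in Section 3.2.3, is given below; the generator API (G.synthesis), the W+ shape, and the use of a plain MSE pixel term plus a VGG-feature term are assumptions, not the exact loss weighting of [38].

```python
# Sketch: embed a target image into W+ by gradient descent on the latent code.
import torch
import torch.nn.functional as F

def embed(G, vgg_features, target, steps=2000, lr=0.01):
    # random initial code in W+ (16 layer-wise 512-d vectors), per [39, 40]
    w = torch.randn(1, 16, 512, requires_grad=True)
    opt = torch.optim.Adam([w], lr=lr)
    for _ in range(steps):                    # e.g. 2000 for cars, 3000 for the shark
        opt.zero_grad()
        img = G.synthesis(w)                  # assumed generator API
        loss = F.mse_loss(img, target) \
             + F.mse_loss(vgg_features(img), vgg_features(target))  # perceptual term
        loss.backward()
        opt.step()
    return w.detach()
```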

Fig. 8 The relationship between the loss value of the input image and the number of iteration steps

From the generated results, it can be seen that the bionic schemes generated by DGBID are imperfect in detail. For example, in the car shapes generated in Fig. 7, some inherent product forms such as tires, mirrors, and windows are missing. It is therefore unwise to use a DGBID-generated image directly as the final bionic result. As this research focuses on design-oriented shape bionics, the contour lines of the DGBID-generated images were traced with Bézier curves during the experiment, gradually visualizing the shape transition from shark to car and providing a theoretical basis and inspiration for the designer to optimize the bionic product in sketch form in the next step. The contour maps extracted from each group of experiments are placed below the automatically generated images in Fig. 7. Each designer then needed to select the contour drawing with the highest bionic value from their experiment. The analysis in Fig. 7 shows that when \(\lambda\) is 1/16 or 2/16, few car features are embedded and the results keep the shark's inherent shape characteristics, so they have no bionic value; when \(\lambda\) is 7/16 or 8/16, the car shape dominates and too few shark features are incorporated, which likewise has no bionic value. The three designers were therefore asked to select the most valuable solution from the interpolation results for \(\lambda\) of 3/16, 4/16, 5/16, and 6/16; the final selections are marked with red borders in Fig. 7.
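Tracing contours by hand with Bézier curves is the paper's approach; as a hypothetical automated alternative, the OpenCV-based sketch below extracts and simplifies the dominant contour of a generated image (the file names are placeholders).

```python
# Sketch: pull a contour reference line out of a generated bionic image.
import cv2
import numpy as np

img = cv2.imread("generated_fusion.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file
_, mask = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
outline = max(contours, key=cv2.contourArea)      # keep the dominant shape
approx = cv2.approxPolyDP(outline, 2.0, True)     # simplify the curve
canvas = np.full_like(img, 255)
cv2.drawContours(canvas, [approx], -1, 0, 2)      # draw the black contour line
cv2.imwrite("contour_reference.png", canvas)
```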

Meanwhile, to demonstrate the superiority of the bionic effect generated by the StyleGAN-based image morphing technique, we compare it with an existing bionic fusion method that generates the bionic scheme with feature point-based image morphing [45]. The algorithm proceeds as follows: first, the side pictures of the shark and the car are labeled with feature points, as shown in the input of Fig. 9, with 16 pairs of corresponding points selected in total; then, using these feature points together with the edge midpoints and corner points of each picture, Delaunay triangulation produces the triangular meshes shown in Fig. 9. A new triangular mesh is obtained by interpolating new positions for each point in the two meshes according to the parameter \(\lambda\), and each triangle pair is affine-transformed and bilinearly interpolated into the new picture. Figure 9 shows the resulting images for \(\lambda\) from 1/16 to 8/16.
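A compact sketch of this baseline, assuming OpenCV and SciPy, is shown below; the blending direction and the handling of boundary points are illustrative simplifications.

```python
# Sketch of feature-point morphing [45]: Delaunay-triangulate matched points,
# affine-warp each triangle pair, and cross-dissolve.
import cv2
import numpy as np
from scipy.spatial import Delaunay

def morph(img1, img2, pts1, pts2, lam):
    # pts1, pts2: (N, 2) arrays of corresponding points (labeled pairs plus
    # corners/edge midpoints); lam = 0 returns img1 in this convention.
    pts = (1 - lam) * pts1 + lam * pts2              # interpolated point positions
    out = np.zeros_like(img1, dtype=np.float32)
    for tri in Delaunay(pts).simplices:              # triangulate the blended mesh
        t1 = pts1[tri].astype(np.float32)
        t2 = pts2[tri].astype(np.float32)
        t = pts[tri].astype(np.float32)
        m1 = cv2.getAffineTransform(t1, t)           # per-triangle affine warps
        m2 = cv2.getAffineTransform(t2, t)
        w1 = cv2.warpAffine(img1, m1, (img1.shape[1], img1.shape[0]))
        w2 = cv2.warpAffine(img2, m2, (img2.shape[1], img2.shape[0]))
        mask = np.zeros(img1.shape[:2], dtype=np.uint8)
        cv2.fillConvexPoly(mask, t.astype(np.int32), 1)
        blend = (1 - lam) * w1 + lam * w2            # cross-dissolve inside triangle
        out[mask > 0] = blend[mask > 0]
    return out.astype(np.uint8)
```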

Fig. 9 Comparison test: bionic scheme generated by the feature point-based image morphing technique

Comparing the results in Fig. 9 with those in Fig. 7 reveals that the StyleGAN-based image morphing technique produces clear bionic effects in an intelligent way and is significantly superior. However, the feature point-based technique is less complex and does not require training on a large image dataset, which is worth learning from.

4.4 Bionic Optimization and Evaluation

Each designer selected a scheme framework with bionic value, as shown in Fig. 10(a). The designers were then asked to adjust the overall framework of the car's shape or improve other details in sketch form. To prevent the sketching from departing from the generated solution's form framework, the framework of Fig. 10(a) was rendered as a green line and used as the starting layer for sketching; the designers sketched with this layer as a reference, with the results shown in Fig. 10(b). Next, each designer's sketch was standardized: the vector lines were extracted and the outline filled uniformly with white. The vectorized results, shown in Fig. 10(c), served as the source for the final bionic shape optimization. At the same time, suitable styles were selected from the market as the style source for the final optimization; the car styles selected by the designers are shown in Fig. 10(f). Finally, since the car images converge at 2000 iterations, the StyleGAN-encoder embedded the images of Fig. 10(c) and (f) into StyleGAN's latent space with 2000 iteration steps, and the final bionic optimization results were generated using StyleGAN's style transfer (\(\lambda\) taken as 1/2 for linear interpolation), as shown in Fig. 10(e).

Fig. 10 Flowchart of bionic optimization. a selected frame images with bionic value, b sketches made by the designers under the frame, c sketches after vectorization, e style transfer result images, f target car style images selected from the market

Using the solutions generated in Fig. 10(e) as the bionic optimization results, the experimental results were combined into a solution cluster and evaluated on three indicators: aesthetics, similarity, and practicality. A five-point Likert scale, with one for "does not meet" and five for "fully meets", was used for each indicator. Twenty-two designers aged 23–32, each with at least 5 years of industrial design background, were invited to evaluate the results in Fig. 10(e). The averages of the scores for the three indicators for each scheme are shown in Fig. 11. According to these results, option (b) has the highest aesthetics and similarity scores but lower practicality, option (c) has the best practicality but lower aesthetics and similarity, and option (a) scores between options (b) and (c). Taking aesthetics, similarity, and practicality together, option (b) was chosen as the best result of the experiment. This completes the whole product styling bionic design, exemplified by car styling.

Fig. 11 Evaluation results of the biomimetic scheme. a, b, c the final bionic schemes optimized by the three designers, respectively

To further validate the methodology, the three designers involved in the entire experimental process were invited to evaluate it. They marveled at the bionic schemes generated by DGBID, which visualize the gradual change from car shape to shark shape through deep learning. Some designers said that DGBID often returns unpredictable results whose logic is difficult to understand, while others felt this uncertainty makes working with DGBID inspiring and stimulating. Although opinions on the DGBID-generated solutions were divided, all three designers were impressed by how well the experiment used human–computer interaction to optimize the results, turning their bionic ideas, expressed as sketches, into realistic product images and letting the computer take over the tedious parts of designing. Finally, the designers were struck by the effect of applying AI technology to design, which makes design surprisingly simple and easy, and looked forward to more AI technology being applied to bionic design.

5 Discussion

In this paper, the bionic creature that best matches the bionic product is selected through perceptual engineering and eye-movement experiments. The StyleGAN model is then trained on a large number of product images, and StyleGAN's image morphing technique visually expresses the potential connection between the bionic product and the bionic creature to generate a new bionic fusion scheme. Finally, the designer co-creatively uses sketches to quickly generate realistic product images and complete the bionic solution optimization. Compared with existing methods, the proposed method generates the bionic fusion scheme more intelligently and automatically, using AI to visualize the abstract bionic design process and making the generated schemes more convincing. At the same time, it embeds the designer's creativity into the bionic product in a co-creative way, improving the diversity, effectiveness, and accuracy of bionic product design. The method is also an innovative exploration of traditional product form design with AI technology.

6 Conclusion

This paper is dedicated to visualizing the bionic design process and proposes DGBID, a human–machine co-creation method that can visually represent the potential mapping between biological shape and product shape, achieved in the bionic fusion step through StyleGAN-based image morphing. Comparing the results in Figs. 7 and 9, the quality of the schemes generated by our method is much higher than that of the alternative. However, compared with feature point-based image morphing, the StyleGAN-based technique is difficult to train and technically demanding, and it requires a large dataset to generate good results, an unavoidable drawback of the method. In future work, we will therefore study deep generative models that need fewer samples, generating high-quality bionic fusion solutions from as little training data as possible, so that designers can conceptualize their ideas quickly with AI technology, reduce redundant steps in the design process, and create products with bionic value.