ScienceDirect
Procedia CIRP 93 (2020) 868–872
www.elsevier.com/locate/procedia
Abstract
Machining vibrations are a critical phenomenon in industry as they negatively affect quality and tool life. One common avoidance strategy for
machining vibrations is the fine-tuning of process parameters, which leads to longer production times. Our research addresses this challenge and
uses different streams of data to classify problematic processes. Data streams of machining parameters, tool position, loads, vibration sensors,
together with process plan data and cutting tool usage information, are visualized. Experiments are performed to derive classification criteria. These
results are then used to observe vibrations in a five-axis machining center for further process adjustment.
© 2020 The Authors. Published by Elsevier B.V.
This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/)
Peer-review under responsibility of the scientific committee of the 53rd CIRP Conference on Manufacturing Systems
Keywords: sensor integration; data fusion; manufacturing data analytics; cyber-physical manufacturing;
Different data streams are combined using a LabVIEW Virtual Instrument to control the accelerometers and convert analogue voltages to digital values. These values are stored in CSV files and sent to a Python application via a TCP protocol. Additionally, data from the CNC machine's PLC is sent to the same application via the tweeting machine and the MTConnect agent. For each data source a thread of live data is created in the Python application. Data are matched by timestamps.

Timestamps of data streams from MTConnect and LabVIEW are formatted identically to enable a comparison. The data from the two streams are saved in a new variable whenever the timestamps match. The LabVIEW data stream may have a rather fine sampling rate, so that more than one timestamp per millisecond may occur. In this case, if the application detects several identical timestamps, it selects the first one for matching. This procedure is repeated throughout the measurement cycle. The logic is presented in Fig. 2.

In our classification we consider the frequencies of the tool and the machine tool using the FFT. For this application the classification criterion distinguishes between stable and unstable signals. A stable signal shows a tight concentration of a few frequencies with peak magnitudes below 0.01 m/s²; the dominant frequencies are the harmonic ones, and the sum of harmonic and non-harmonic components has peaks that do not exceed 0.0025 m/s². An unstable process exhibits more harmonic frequencies (more than 7) and dominant non-harmonic ones, see Fig. 3.
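As an illustration only, this criterion can be sketched in a few lines of Python. This is not the authors' implementation: the spindle frequency used to decide which peaks count as harmonic, the peak-detection noise floor and the harmonic tolerance are assumptions introduced here, while the 0.01 m/s² and 0.0025 m/s² magnitude limits follow the values quoted above.

import numpy as np

def classify_stability(signal, fs, spindle_hz, tol=2.0, noise_floor=1e-4):
    """Label a vibration trace 'stable' or 'unstable' following the criterion
    above: peak magnitudes below 0.01 m/s^2, dominant peaks at harmonics of
    the (assumed) spindle frequency, non-harmonic peaks not exceeding
    0.0025 m/s^2 and no more than 7 harmonic peaks."""
    window = np.hanning(len(signal))
    spectrum = 2.0 * np.abs(np.fft.rfft(signal * window)) / window.sum()
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)

    harmonic, non_harmonic = [], []
    for i in range(1, len(spectrum) - 1):
        # crude local-maximum test above an assumed noise floor
        if spectrum[i] > max(spectrum[i - 1], spectrum[i + 1], noise_floor):
            k = round(freqs[i] / spindle_hz)
            if k >= 1 and abs(freqs[i] - k * spindle_hz) <= tol:
                harmonic.append(spectrum[i])
            else:
                non_harmonic.append(spectrum[i])

    stable = (spectrum[1:].max() < 0.01
              and len(harmonic) <= 7
              and (not non_harmonic or max(non_harmonic) <= 0.0025))
    return 'stable' if stable else 'unstable'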
Fig. 4: Manufacturing of the Boxy test piece with various manufacturing operations
To establish a sufficiently large database, 25 Boxys were manufactured and the fused data streams were recorded as continuous time series. It is beyond the scope of this paper to identify performance criteria for all the different manufacturing operations. However, for demonstration purposes, the authors present the experiments on slot milling, which is an integral part of the pocketing operation.

The vibration measurements were performed with a sampling rate of 2000 Hz in the machining center, using an end mill. Two extra similar end mills were provided to investigate vibrations due to a used and a new tool.

The end mill was gauged for an initial depth of 3 mm for the first seven out of nine pieces. The tool cut a straight line towards the higher cutting depth. To avoid different cutting conditions and to engage both sides of the cutting tool, a 2 mm separation was left between the tracks. Different cutting conditions were created by varying the spindle speed and the feed rate.

The machining data of several Boxys was analysed as follows. Raw data from the CSV files were searched for peaks in the amplitudes of each of the six measured axes. Those peaks were further compared with other measurements of the same setup but for different Boxys. If a peak appears consistently, a correlation can be made with the tool penetrating the material for the first time. A peak or another anomaly might be a sign of changing forces within the cutting process leading to chatter.

A second method used was waterfall plots. Waterfall plots consist of FFTs of the raw data stacked up according to the time of the data. A Hanning window with 0.5 overlap was applied, enclosing a defined number of values to create conditions more suited for the FFT. This approach allows for an identification of shifting frequencies over time. Such shifts were correlated with setups, and their regularity within the data was analyzed.

Lastly, the data were analysed for consistently appearing frequencies that are present whether an operation is ongoing or not. This was done to find out if the machining center naturally produces certain frequencies simply from running in idle mode.
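A minimal NumPy/SciPy sketch of this waterfall analysis is given below. It is not the code used in the study; the segment length and the detection level in the helper are illustrative assumptions.

import numpy as np
from scipy.signal import stft

def waterfall(acceleration, fs=2000, nperseg=1024):
    """Stack short-time FFTs of a raw acceleration trace (Hanning window,
    50% overlap) so that shifting frequencies become visible over time."""
    f, t, Z = stft(acceleration, fs=fs, window='hann',
                   nperseg=nperseg, noverlap=nperseg // 2)
    return f, t, np.abs(Z)        # rows: frequency bins, columns: time slices

def ever_present(waterfalls, level=0.001):
    """Boolean mask of frequency bins that exceed 'level' in every recording,
    e.g. to spot components the machine produces even when idling."""
    masks = [(w > level).any(axis=1) for w in waterfalls]
    return np.logical_and.reduce(masks)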
6. Outlook

The provided framework presents an approach to manufacturing data acquisition, processing and analysis with data from various data sources. This approach allows for a better description and deeper understanding of results and behavior, as well as providing the possibility to track the process conditions. The solution gives a method to identify critical machining conditions. At the same time, challenges and limitations remain, among them the following:

1) The current sampling rate of MTConnect of 2 Hz is not sufficient for high machining speeds.
2) Our criterion is not applicable to a classification of chatter; such a classification would require further modelling.
3) The dynamics between the housing and the spindle could be valuable for the vibration analysis. This would definitely influence a formulation of any universal criterion.

Our analysis should be included as a first step for any following work. Accuracy can be improved by measuring machining parameters and tool positions at a higher sample rate.

Using such a system continuously on an industrial PC can improve the immediate response to wrong machining parameters. In addition to that, the application of machine learning algorithms could give new insights into the data. Another technical improvement can be the Hilbert-Huang transformation, which has been used in related research.

Acknowledgements

This work is supported by VINNOVA project 2017-05208 Nationell Testbädd Smart Produktion.
Available online at www.sciencedirect.com
ScienceDirect
Procedia CIRP 93 (2020) 891–896
www.elsevier.com/locate/procedia
Abstract
Artificial intelligence (AI) is gaining importance in many domains and may soon take over decision-making responsibilities in production
management from production managers. For the future, it will be vital to identify each entity's domain of decision-making superiority. Therefore,
this paper proposes and applies a model to assess AI performance in contrast to human decision-making. Relying on reinforcement learning and
item response theory, the approach describes a minimum viable setup for AI systems to identify opportunities for AI systems in manufacturing.
The model is based on operative production management decisions (job-shop scheduling) and validated through a series of academic scheduling
instances.
© 2020 The Authors. Published by Elsevier B.V.
This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/)
Peer-review under responsibility of the scientific committee of the 53rd CIRP Conference on Manufacturing Systems
Keywords: Production; Artificial intelligence; Scheduling; Performance; Reinforcement Learning; Item Response Theory
3.1. Design of the production system

To put forward our assessment methodology, for this study we selected low-complexity production tasks, namely static, non-reactive, single-agent job-shop scheduling problems (JSSP) with perfect information and a relatively small solution space. The planning horizon is short so as to minimize intertemporality issues, and sufficient time and (human) intelligence should be available.

We choose to regard a JSSP for the following reasons: they represent well-studied and important PM decision-making problems (e.g. see [20]), static look-ahead schedules can be considered a perfect-information problem, and the problem size is scalable.

3.2. Design of decision-making AI

This approach seeks to lay the groundwork for a generic model that allows one to measure an AI's performance on a given task. Many AI and machine learning techniques, i.e. supervised and unsupervised learning, rely on large amounts of data from which to generate models. Another subset of AI tools is reinforcement learning (RL) [21]. RL can help to mimic and automate the sequential human decision-making process. The goal is to solve a problem by finding the best policy of action, irrespective of the deeper structure of the experience gathered during interaction with the environment. Thus, RL does not require performing an intermediate system identification process, enabling good learning results without the need for much prior knowledge of the problem [21].

RL problems are usually modelled via a Markov Decision Process (MDP) as depicted in Fig. 1. MDPs consist of a tuple (S, A, P, R), of which A represents all possible actions a that the algorithm can use to change its state [21]. Every achievable state for the RL problem is found within the state space S. For every action a taken in state s, a specific reward r(s, a, s') ∈ R arises, which leads to an overall increase of the return R. P(s', r | s, a) describes the probability of reaching a state s' with reward r given the current state s and taking action a. Additionally, a value function Q : S × A → ℝ indicates the expected value of a state or state-action pair [21].

Fig. 1: The agent–environment interaction in a Markov decision process [21]

One distinct technique in RL is Q-learning. It is an off-policy learning algorithm, i.e. the policy being followed (often epsilon-greedy) is different from the one being learned as q-values q(s_t, a_t). Here, a_t is the action the agent takes in state s_t. For each training epoch, the q-value is updated according to the following rule:

q(s_t, a_t) ← q(s_t, a_t) + α ( r_{t+1} + γ max_a q(s_{t+1}, a) − q(s_t, a_t) )     (1)

Herein r_{t+1} is the reward incurred from executing the action in the environment, and α and γ are control parameters: α (0 < α ≤ 1) is the learning rate. α → 0 indicates no learning, while α → 1 means that new experience fully overrides earlier estimates. Similarly, γ controls the far-sightedness of the agent: for γ → 0, the agent only considers current rewards (r_t), whereas γ → 1 leads to maximization of long-term gains [21]. As our implementation of the scheduling instances interprets the problems as continuous tasks without a terminal state, γ should not be 1, in order to discount future rewards and prevent them from accumulating infinitely.

As JSSPs can as well be modelled using an MDP approach [22] and RL is known to be a good tool for solving JSSPs, see [20,22–24], we regard RL as a relevant AI technique for our purpose.

3.3. Item response theory

As we seek a design that is relevant for different sets of tasks, a methodology that requires large background databases with human scores is unfit for this purpose. Furthermore, benchmark results from arcade games seem unfit as their tests require uniformity across problems for comparability. The performance assessment methodology has to cope with different scales of complexity in PM environments. IRT measures problem difficulty and the ability required for decision-making, which is why we have selected IRT as our method to compare human decisions to AI-made ones. IRT is a psychometric test set originally designed to measure examinees' ability through a test with several questions (i.e. items). It is a set of mathematical models that describe the relationship between a latent trait of interest, which is not directly measurable, and an examinee's answers to individual items, where the probability of a response for an item is a function of the examinee's ability [25]. The basic 3-parameter IRT model calculates the probability P of a correct response Ui = 1 on an item i given the examinee's ability θj as a logistic function:

P(U_i = 1 | θ_j) = c_i + (1 − c_i) / (1 + e^(−a_i (θ_j − b_i)))     (2)

This model provides the item characteristic curve (ICC) for each item tested and is described by the following parameters:

• Discrimination a: slope of the function at location parameter b. Higher values indicate higher changes in item response for small changes in ability;
• Difficulty b (also called location parameter): describes the point where an examinee has a 50% probability of answering correctly (assuming no guessing);
• Guessing c: probability of a correct answer for examinees with low ability.
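Written out as code, Eq. 2 is a one-line function; the parameter values in the example call are hypothetical and are not taken from the study.

import numpy as np

def icc_3pl(theta, a, b, c):
    """Item characteristic curve of the 3-parameter logistic model (Eq. 2):
    probability of a correct response given ability theta, discrimination a,
    difficulty b and guessing parameter c."""
    return c + (1.0 - c) / (1.0 + np.exp(-a * (theta - b)))

# Hypothetical item: moderate discrimination, average difficulty, some guessing.
print(icc_3pl(theta=0.0, a=1.5, b=0.5, c=0.2))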
4. Experiments

To test the assessment model, we conducted a brief experiment in which students competed against a simple AI. We selected two academic scheduling problems as experiment cases that are used in the chair's exercises and lectures to train students in the field of operations research. In general, JSSPs consist of n jobs to be processed on m machines. For our study we regard the following two problems: the first (denoted as 'Case A') consists of atomic jobs (i.e. no sequence of operations) and lowest combinatorial complexity, which, in an n x m JSSP, has n = 5 jobs and m = 1 machine. The problem has to be solved by finding a sequence that has the least cumulative delay (minimum tardiness) with regard to a target due date for each job. The second problem ('Case B') is a 5x3 Flow Shop Scheduling Problem. The operations sequence is fixed for each job (M1→M2→M3). In contrast to Case A, the criterion for sequence optimality is makespan minimization. Each job can only be passed on to the next machine after finishing the operation on the previous one.

The key to solving these problems without exhaustive computing resources is the knowledge of different heuristics which had been taught to the students in lectures and exercises. For the 5x1 problem, a suitable heuristic can be shortest processing time (SPT), while the 5x3 problem can be solved via Johnson's algorithm. For both cases, the optimal sequence was known to the examiners.

The overall logic of the experiment is as follows: in a uniform sample of examinees, each one answers two operations research tasks with varying difficulty. The examinees are assumed to have different knowledge levels regarding possible heuristics to solve the tasks, meaning they all have a different ability θ. Given different difficulties for Case A and Case B, the mean ability over the entire sample will yield different probabilities P(Ui = 1) of finding the optimal sequence. Subsequently, we will run the RL experiment to estimate the number of training epochs required to match the human probability scores. Details of the setup and results for both cohorts are presented in the following two sub-sections.

The answers were coded dichotomously, i.e. 0 or 1. The data were processed in R using the library 'ltm' [26]. Table 2 shows the results of the parameter calculations.

Table 2. Overview of human cohort results from IRT parameter estimation.

Parameter           Case A     Case B
Discrimination a    5.3612     6.5651
Difficulty b        1.0057     -0.0260
Guessing c          0.0679     0.3989

Based on the participants' answers, the results show that Case A (5 atomic jobs) was significantly harder to solve than Case B. Notably, there is an almost 40% chance of scoring correctly on Case A by guessing. The discrimination parameter also scores lower in Case A than in Case B, which indicates a narrower spread in correct answers across the participants. The mean ability of the respondents is computed to θ_mean = 0.1363. The probabilities for correct answers in turn are P_A(θ_mean) = 0.4045 for Case A and P_B(θ_mean) = 0.7613 for Case B.

Fig. 2 displays the plots of the ICCs that correspond to the above parameters. The black curve represents Case A, and the red curve represents Case B. The vertical blue line represents the mean ability. The horizontal dotted lines indicate the success probability for each case with respect to the computed participants' mean ability θ_mean.
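The two heuristics mentioned above can be sketched as follows. The processing times and due dates are made up for illustration and are not the exercise instances used in the experiment; the three-machine variant of Johnson's rule shown here is only guaranteed to be optimal when the middle machine is dominated.

def spt_sequence(proc):
    """Case A heuristic: order jobs by shortest processing time."""
    return sorted(proc, key=proc.get)

def total_tardiness(sequence, proc, due):
    """Cumulative tardiness of a single-machine sequence."""
    t, tard = 0, 0
    for job in sequence:
        t += proc[job]
        tard += max(0, t - due[job])
    return tard

def johnson_three_machine(p1, p2, p3):
    """Case B heuristic: Johnson's rule on the composite times
    (p1 + p2, p2 + p3) for an M1->M2->M3 flow shop."""
    a = {j: p1[j] + p2[j] for j in p1}
    b = {j: p2[j] + p3[j] for j in p1}
    front, back = [], []
    for j in sorted(p1, key=lambda j: min(a[j], b[j])):
        if a[j] <= b[j]:
            front.append(j)       # schedule as early as possible
        else:
            back.insert(0, j)     # schedule as late as possible
    return front + back

# Illustrative 5x1 instance only.
proc = {'A': 4, 'B': 2, 'C': 6, 'D': 3, 'E': 5}
due = {'A': 8, 'B': 5, 'C': 20, 'D': 9, 'E': 14}
seq = spt_sequence(proc)
print(seq, total_tardiness(seq, proc, due))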
The agent's actions are defined as tuples (d, e) with d < e, used to swap the jobs corresponding to the stated numbers within the action. For instance, applying the action a(1,3) to state ABCDE will result in state CBADE. Alternatively, the agent can choose to use the (d, d) action, referring to an idle action, since the one job would be swapped with itself, resulting in the same sequence. While the (d, e) actions are used to search for a better sequence of jobs, the 'idle' action is meant for high-reward states, in which the agent is incapable of improving the sequence, hence yielding a better reward for keeping the current job sequence. This way, the agent will learn the optimal policy of swaps in order to reach the state corresponding to the idle action where the highest reward is incurred by doing nothing, which is described as q*("Sopt", "11").

Rewards for taking an action on state s are attributed as follows: for Case A, the cumulative delay on all jobs is fed back as (negative) reward. The optimal sequence incurs 0 delay and thus a reward of 0. Earlier finishes did not incur any positive reward. As zero rewards are to be avoided in order to make the optimal sequence stick out from an otherwise sparsely populated matrix, a small positive reward of 10 is given for delays of zero at the end of the episode instead. The rewards for Case B are shaped similarly: the optimal sequence minimizes the makespan of all jobs, which is a number known a priori. (Negative) rewards are computed as the difference between the optimal and the actual makespan, with the optimal sequence again incurring 0 reward, which is increased to 10.

For the experiments, we increased the amount of training samples from 10 < n < 1,000 in increments of 10. For each round, the algorithm computed K = 22 epochs to match the number of human participants. After each epoch, the state corresponding to the maximum q-value for action a(1,1) was output. Training was done with the control parameters α = 0.2, γ = 0.8 and ε = 0.3.

As this experiment did not require any expensive computations and could be performed on a regular office laptop, the following comparisons of results refer to the outputs only. We omitted a cost analysis on the input side while acknowledging that this is an important aspect for future work.

For the comparison of human to AI performance, a uniform assessment metric is required. IRT results are computed as the probability of a correct response. Thus, we express the probability of correct answers as the share of the right job sequences found by the AI. The computation rule for the RL algorithm's probability rating is given in Eq. 3.

P(U_i = 1 | n_k) = ( Σ_{k=1}^{K} R_k ) / K     (3)

Eq. 3 calculates the probability of finding a correct sequence for Case i, given the number of training samples n in participant k, as the sum of correct responses R_k per episode divided by 22. R_k was assigned the value 1 if the response of k corresponded to the known optimal sequence and 0 otherwise.

Fig. 3 shows the training results. The Q-learning algorithm exhibits next to equal performance on both cases, indicated by the dashed logarithmic fit lines. Human performance for Case A (black, ~40% chance of answering correctly) is met after about 40 samples. For Case B (red, ~76% chance), 220 samples are needed. After a full training cycle (1,000 epochs), the probability of correct answers for both cases converges to P(U_{1/2} = 1) = 1. From the ICCs in Fig. 2 we see that these match an ability of θ → ∞. A 100% chance of finding the correct sequence is achieved at ability scores around 2.

Fig. 3: Results of RL training (circles), logarithmic regression (dashed lines), and no. of training samples at human probability score (solid lines)
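To make the setup above concrete, the sketch below implements a tabular Q-learning agent with the swap/idle action set, the Case A reward shaping and the stated control parameters. It is a reconstruction under assumptions, not the authors' code: the job data, episode length and number of episodes are illustrative, and the share of correct output sequences over K epochs would then give the probability rating of Eq. 3.

import random
from itertools import combinations, permutations

ALPHA, GAMMA, EPS = 0.2, 0.8, 0.3                 # control parameters from the text
PROC = {'A': 4, 'B': 2, 'C': 6, 'D': 3, 'E': 5}   # illustrative processing times
DUE = {'A': 8, 'B': 5, 'C': 20, 'D': 9, 'E': 14}  # illustrative due dates

IDLE = (1, 1)
ACTIONS = [IDLE] + list(combinations(range(1, 6), 2))   # swaps (d, e) with d < e

def apply_action(state, action):
    d, e = action
    jobs = list(state)
    jobs[d - 1], jobs[e - 1] = jobs[e - 1], jobs[d - 1]  # (1,1) leaves the state unchanged
    return ''.join(jobs)

def tardiness(seq):
    t, tard = 0, 0
    for j in seq:
        t += PROC[j]
        tard += max(0, t - DUE[j])
    return tard

def reward(state, action):
    # Case A shaping: negative cumulative delay, +10 for idling on a
    # zero-tardiness sequence so the optimum stands out in the q-table.
    tard = tardiness(apply_action(state, action))
    return 10.0 if tard == 0 and action == IDLE else -float(tard)

def train(episodes, steps=20):
    q = {}
    for _ in range(episodes):
        state = ''.join(random.sample('ABCDE', 5))
        for _ in range(steps):
            a = (random.choice(ACTIONS) if random.random() < EPS
                 else max(ACTIONS, key=lambda x: q.get((state, x), 0.0)))
            nxt = apply_action(state, a)
            best_next = max(q.get((nxt, x), 0.0) for x in ACTIONS)
            old = q.get((state, a), 0.0)
            q[(state, a)] = old + ALPHA * (reward(state, a) + GAMMA * best_next - old)  # Eq. 1
            state = nxt
    return q

def output_sequence(q):
    """State with the highest q-value for the idle action a(1,1)."""
    states = {''.join(p) for p in permutations('ABCDE')}
    return max(states, key=lambda s: q.get((s, IDLE), float('-inf')))

q_table = train(200)
best = output_sequence(q_table)
print(best, tardiness(best))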
5. Discussion

The results presented in the previous section show that very low training effort is required to match human ability for this – purposefully simple – production case. As the RL ability scores surpass the human scores, we infer that the specific PM decision presented herein can securely be submitted to an AI. At the core of this paper was a demonstration of Q-learning as a viable option for low-complexity production decisions with factorial alternatives, low intertemporality and perfect information. We induce that for similar production management decisions for which the same criteria apply, AI will be equally able to outperform human decision-makers.

Furthermore, by expanding our setup with more complex cases as well as different algorithms in the future, we can compose a solution space that identifies decision-algorithm pairs with par-human, high-human or even greater ability scores, meaning that these PM tasks are automatable.

Limitations and future research needs

While we assert that our performance assessment methodology can play a vital role in the design of future Cyber Production Management Systems with respect to the crucial decision of which PM decisions should be automated with AI support, we acknowledge that the approach and results presented herein are only a first step. Our research was subject to the following methodological limitations that require further consideration in the future:

First, following previous approaches in AI performance research [15–18], we employed item response theory to model the mean ability of respondents. In this case, IRT only uses dichotomous data (correct or incorrect answers), which we expect not to be applicable to higher-order problems. However, IRT also provides models for polytomous items such as Likert scales or normalized data. Future research should thus incorporate grading or assessment schemes that allow answers to be distinguished on a more delicate scale, i.e. normalized to the best answer given or as fuzzy sets.

Second, our sample of human participants consisted of production management students exclusively. The latent trait we estimated from the responses described the knowledge of heuristics to solve the problem. It remains to be investigated how a sample of manufacturing experts would perform on the same problems and whether their performance would yield substantially different outcomes in terms of training samples required for the Q-learning algorithm.

Third, we stress once more that demonstrating AI usefulness for production research is not the purpose of this paper, but demonstrating AI power in relation to human problem-solving skills. In fact, the problem cases used herein are too simple to warrant the use of an AI. Further research on this topic will therefore use more elaborate RL techniques as well as more sophisticated state spaces to test the generalization ability of the algorithms.

Lastly, we used only one single parametrization. Different parameter settings in one algorithm may lead to different results, possibly even diverging drastically between cases. In order to produce a more generalizing statement of AI-vs-human performance, a sensitivity analysis of algorithm parameters could be used in the future.

To overcome these limitations, the authors are creating an actor-critic based RL algorithm for reactive job-shop scheduling that is able to handle more complex cases. Simultaneously, we are designing a user-friendly interface to deploy online and in lectures to make the problem more accessible to a wider audience.

6. Conclusion

AI vs. human performance assessment is not entirely new. Most assessments rely on benchmark results from known, large databases to classify an AI's performance in relation to human participants. As these benchmarks need labelled training data, the tested domain is usually very narrow. Assessments that compare AI and human abilities on more general sets of tasks are scarce. Some work exists on using IRT to introduce uniformity and a representation of levels of difficulty in image recognition and game scoring.

The novelty aspect of this paper is to adopt the IRT approach to compare an RL algorithm against human performance on two simple job-shop scheduling examples which represent low-complexity decisions in PM. We demonstrated that an AI's performance on these tasks can be approximated based on human responses to the same case.

Our AI, based on RL, has reached par-human scores after a small number of training epochs. From the findings we induce that these results will be replicable for similar PM decisions. However, future research will have to operationalize human and AI performance assessments for PM tasks of varying complexity. With the presented experiments, we have taken a first step towards the concept of Cyber Production Management.

References

[1] (2015) ... In: Zukunft der Arbeit in Industrie 4.0, pp. 99–109.
[2] (2018) ... In: 2018 International Conference on Information Management and Processing (ICIMP), pp. 82–88.
[3] Martínez-Plumed, F., Hernández-Orallo, J. (2016) AI results for the Atari 2600 games: difficulty and discrimination using IRT. In: EGPAI, Evaluating General-Purpose Artificial Intelligence.
[4] Silver, D., et al. (2016) Mastering the game of Go with deep neural networks and tree search. Nature 529(7587):484–489.
[5] Brown, N., Sandholm, T. (2019) Superhuman AI for multiplayer poker. Science 365(6456):885–890.
[6] (2008) ... In: Computers and Games.
[7] The SSDF rating list. (accessed 28.01.2020).
[8] Silver, D., et al. (2017) Mastering the game of Go without human knowledge. Nature 550(7676):354–359.
[9] Deng, J., et al. (2009) ImageNet: A large-scale hierarchical image database. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2009), Miami, FL, USA, pp. 248–255.
[10] The Quartz guide to artificial intelligence: What is it, why is it important, and should we be afraid? (accessed 28.01.2020).
[11] Rajpurkar, P., Zhang, J., Lopyrev, K., Liang, P. (2016) SQuAD: 100,000+ Questions for Machine Comprehension of Text. arXiv preprint arXiv:1606.05250.
[12] AI Beats Humans at Reading Comprehension, but It Still Doesn't Truly Comprehend Language. (accessed 28.01.2020).
[13] Mnih, V., et al. (2015) Human-level control through deep reinforcement learning. Nature 518(7540):529–533.
[14] Bellemare, M.G., Naddaf, Y., Veness, J., Bowling, M. (2013) The Arcade Learning Environment: An evaluation platform for general agents. Journal of Artificial Intelligence Research 47:253–279.
[15] Martínez-Plumed, F., Prudêncio, R.B.C., Martínez-Usó, A., Hernández-Orallo, J. (2016) Making sense of item response theory in machine learning. In: ECAI'16: Proceedings of the Twenty-second European Conference on Artificial Intelligence, pp. 1140–1148.
[16] Lalor, J.P., Wu, H., Yu, H. (2016) Building an evaluation scale using item response theory. In: Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP 2016), pp. 648–657.
[17] Lalor, J.P., Wu, H., Munkhdalai, T., Yu, H. (2018) Understanding Deep Learning Performance through an Examination of Test Set Difficulty: A Psychometric Case Study.
[18] (2018) ... arXiv preprint arXiv:1811.08186.
[19] (2015) Entscheidungslehre: Wie Menschen entscheiden und wie sie entscheiden sollten. 10th ed.
[20] Stricker, N., Kuhnle, A., Sturm, R., Friess, S. (2018) Reinforcement learning for adaptive order dispatching in the semiconductor industry. CIRP Annals 67(1):511–514.
[21] Sutton, R.S., Barto, A.G. (2018) Reinforcement learning: An introduction. MIT Press.
[22] Gabel, T., Riedmiller, M. (2008) Adaptive reactive job-shop scheduling with reinforcement learning agents. International Journal of Information Technology and Intelligent Computing 24(4).
[23] (2011) ... In: Learning and Intelligent Optimization: 5th International Conference, LION 5, Rome, Italy, January 17–21, 2011; selected papers, pp. 253–262.
[24] (2018) ... In: 2018 10th International Conference on Communication Systems & Networks (COMSNETS), Bangalore, India.
[25] (2013) Item response theory.
[26] Rizopoulos, D. (2006) ltm: An R package for latent variable modeling and item response theory analyses. Journal of Statistical Software 17(5):1–25.
[27] (2018) ... arXiv preprint arXiv:1810.00240.
Available online at www.sciencedirect.com
ScienceDirect
Procedia CIRP 93 (2020) 944–948
www.elsevier.com/locate/procedia
Abstract
Little attention in the Lean literature has focused on disruptive events and change. However, an awareness is emerging of the insufficiencies built into this line of thinking and the way it is conducted in practice. The purpose of this paper is to highlight how underdeveloped Lean thinking is when it comes to meeting disruptive events or Kodak moments. This case study illustrates how a strong commitment to Lean thinking does not necessarily help in critical situations. The case company followed a path that turned Lean from a top-down initiative into a bottom-up approach, including softer components such as knowledge, reflection, culture and involvement. While this compensated for the situation to some extent, in general the Lean tool-box was insufficient to align the organization to change. Missing in the change initiative was time for reflection and discussion about what to reflect upon, a central part of Hansei. The paper gives insights into how a company with a traditional Lean implementation struggles to align change with market development. The solar panel industry serves as the case study; 17 in-depth interviews were conducted with people directly involved in the cases.
© 2020 The Authors. Published by Elsevier B.V.
This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/)
Peer-review under responsibility of the scientific committee of the 53rd CIRP Conference on Manufacturing Systems
Keywords: Lean; Disruptive change; continuous improvement; Hansei; Kaizen
competition was natural in a market that had seen 40-50 % yearly growth rates. Higher global volumes of silicon wafers and lower prices were one thing, but this increased volume also came with higher quality standards. TEC at that time had limited control of their downstream value chain and thus lacked feedback on product quality. The mantra had been to deliver as much quantity as possible, assuming that every unit produced was 100% in accordance with customer requirements. As a response, TEC restructured their factories, management was replaced, and quality and performance departments were revitalized.

A comprehensive business system, based on Lean principles and proven in other divisions of the owner's conglomerate of companies, was implemented in 2010. This business system equalized the importance of both culture and structure, meaning that all employees need understanding and knowledge of when and how to utilize methods and tools in the Lean toolbox, thereby making it possible to do problem solving at the lowest level in operations. Within a year after introducing the business system, nearly all employees had been through a learning program for continuous improvement. A team structure was implemented to distribute roles and responsibilities and to address problems at root cause. Identified problems and challenges were rated according to an improvement hierarchy, ranging from the simplest state of "just do it" to the formation of temporary and cross-discipline improvement teams aiming to solve complex problems by gathering facts, analyzing them with advanced statistical tools and discussing results and actions across team and value chain boundaries. Successful results of improvement projects in one factory were rapidly distributed to the other factories, a transformation made possible by the new management and factory structure. The efforts gave results in the period 2010-2011 in terms of improved quality (wafer yield increased from 83% to 95%, making TEC better than the global average), cost (cost per unit decreased by 50%), and HES (absence due to job-related issues reduced by 88%). However, global prices on silicon wafers continued downward at a higher pace than TEC was able to compensate for with the Lean business system.

This case illustrates a typical situation where companies experience good times over longer periods, where the market pays well for any product offered by the seller. As noted, Lean as a production philosophy was emphasized by top management from the very beginning. However, not as an integrated and holistic system well understood and embraced by the employees, but rather as a control regime failing to achieve the intended smooth, zero-defect and cost-effective one-piece-flow production. Thus, TEC went on with their mass production, earning satisfactory earnings for the shareholders, and with an obvious significant improvement potential. A potential unleashed too late to make TEC prepared for the market uncertainties to come. When the crisis became present, Lean as a holistic business philosophy seemed easy to "sell in". The unsolved question is whether Lean could have been the savior, or at least postponed the close-down by years, without a certain degree of urgency. This leads to the dilemma or constructed contradiction between continuous improvement and radical innovation.

5. Discussions

The case illustrates a typical situation where companies experience good profitability over longer periods, where the market pays well for any product offered by the seller. As noted, Lean as a production philosophy was emphasized by top management from the very beginning. Nevertheless, not as an integrated and holistic system well understood and embraced by the employees, but rather as a control regime failing to achieve the intended smooth, zero-defect and cost-effective one-piece-flow production. Thus, the case company went on with their mass production, earning satisfactory earnings for the shareholders, and with an obvious significant improvement potential.

The top-down approach is in contrast to the mainstream Lean literature [4, 19, 20], where support from the top is essential but the effort must be driven as a more bottom-up approach. A similar argument was offered by March and Simon [16] when they tried to explain the difference between OL1 and OL2. If you just attack the symptoms of the problem and use the problem solving as a control measure, it will lead to OL1 and, sometimes, defensive actions. In other words, the organizational learning suffers. At TEC, we did not find any evidence of defensive action, but the problem solving was not employee driven either. On the positive side, TEC had a strong cultural feeling of "being in the same boat". Employees wanted to learn in order to do a better job and to secure their workplace. This driving force was one of the main contributors to not having defensive actions, which could have damaged the company's survival.

The learning and implementation of Lean tools for continuous improvement and problem solving seemed to get total focus in the entire company. Battling the falling prices, the attention of the leaders seemed to become more one-sided, taking important things for granted. It meant that organizational learning was the issue to follow up, without time to reflect and without delegation of authority.

Operators continued to improve operations and saved costs without knowledge of the development of the market prices. Leaders forgot to ask whether Lean, in the way they had implemented it, was the right way forward. Would continuous improvement and organizational learning be able to deliver enough cost saving and efficiency to compensate for the tumbling market prices? For the researchers, this seems not to be the case. The leaders were determined to stick with Lean and shave costs. Investments in newer technology and more automation of operations were, of course, considered, but having relatively new technology in place and little time and capital to invest further, this was not a realistic option in the short time perspective.

The leaders promoted a showcase for Lean thinking and, perhaps, felt obliged to follow this route. This impression manifested itself when the CEO, after the closing of the factory, talked about how they had successfully implemented Lean.
In a presentation for other companies, he proudly presented how he had used the local media to motivate operators through positive publicity.

Going back to the interviews, the impression of not fully understanding how to manifest responsibility in the lower ranks was obvious in the first stage of the Lean journey. People were waiting for orders or doing continuous improvement in the areas where the leadership measured. Little to no time was set aside for self-reflection upon the direction in which the company and each department were heading. Overall, the bigger picture was missing.

The contribution of this paper is to reflect upon, and discuss, the possible shortcomings of Lean in a crisis situation. The way Toyota describes the value of Lean is that continuous improvement of products, processes and people over time smoothens out the negative effects of unexpected events. This long-term business philosophy integrates all functions of a company, from operations, product development and technology integration to other support functions. Thus, partly and recently implemented Lean principles will less probably withstand a major crisis, as discussed in our case. What is needed is a more holistic Lean perspective, in addition to a larger degree of people involved in understanding the context, implementing and following up on Lean principles, and reflective actions upon prioritizing activities leading towards targets. It is hard to generalize from a one-case study, but reflective actions, as seen from the literature, may be underestimated as a contributing factor towards achieving OL2.

6. Conclusion

The two elements, time for reflection and what to reflect upon, seem to be two essential key parts in Lean or OL. Setting the boundaries for reflection and improvement too narrowly can have severe consequences in times of disruptive change. Having a top-down approach enforced the misleading efforts in Lean. Top management failed to ask the right questions and truly engage the operators in Hansei. This had fatal consequences when the market price dropped. The cost saving and increased efficiency were not enough and hardly the right thing to do.

Acknowledgements

We wish to thank the Norwegian Research Council for financial support of this project. We also thank the other researchers that participated in the four-year project to find out how Norwegian companies implement Lean in their daily operations.

References

1. Davenport, T.H., The Fad That Forgot People, in Fastcompany. 1995.
2. Stewart, P., et al., We sell our time no more: Workers' struggles against Lean Production in the British Car Industry. 2009, London: Pluto Press.
3. Harrison, B., Lean and mean: the changing landscape of corporate power in the age of flexibility. 1994, New York: Basic Books. XI, 324 p.
4. Liker, J., The Toyota Way: 14 Management Principles From The World's Greatest Manufacturer. 2004, New York: McGraw-Hill.
5. Netland, T.H. and D.J. Powell, The Routledge Companion to Lean Management. 2017, New York: Routledge.
6. Klein, J.A., The Human Cost of Manufacturing Reform. Harvard Business Review, 1989. March-April: p. 60-66.
7. Fujimoto, T., Capability Building and Over-Adaptation: A Case of 'Fat Design' in the Japanese Auto Industry, in Coping with Variety: Flexible productive systems for product variety in the auto industry, Y. Lung, J.-J. Chanaron, and D. Raff, Editors. 1999, Ashgate Publishing Limited: Hampshire. p. 261-286.
8. Fujimoto, T., The Evolution of a Manufacturing System at Toyota. 1999, New York: Oxford University Press.
9. Berggren, C., The Volvo experience: alternatives to lean production in the Swedish auto industry. 1993, Houndmills: Macmillan. XIII, 286 p.
10. Berggren, C., NUMMI vs. Uddevalla: a Rejoinder. Sloan Management Review, 1994. 35(2): p. 37-49.
11. Green, W.C. and E.J. Yanarella, eds. North American Auto Unions In Crisis: Lean Production as Contested Terrain. 1996, State University of New York Press: Albany.
12. Rinehart, J., C. Huxley, and D. Robertson, Just another car factory?: lean production and its discontents. 1997, Ithaca: Cornell University Press. XI, 249 p.
13. Johnsen, H.C.G., H. Holtskog, and J.R. Ennals, eds. Coping with the Future: rethinking assumptions for society, business and work. 2018, Routledge: London.
14. Ringen, G. and H. Holtskog, How enablers for lean product development motivate engineers. International Journal of Computer Integrated Manufacturing, 2011: p. 1-11.
15. Ringen, G., E. Lodgaard, and C. Langeland, How to Succeed With Continuous Improvement in a Product Development Environment, in NordPLM'09, 2nd Nordic Conference on Product Lifecycle Management. 2008.
16. March, J., et al., Organizations. 1994. p. 786-787.
17. Strauss, A. and J. Corbin, Basics of Qualitative Research: Techniques and Procedures for Developing Grounded Theory. 2nd ed. 1998, Thousand Oaks: Sage Publications.
18. Bygdås, A. and E. Falkum, Lean i solnedgang. AFI rapport 2012:18.
19. Morgan, J. and J.K. Liker, The Toyota Product Development System: Integrating People, Process and Technology. 2006, New York: Productivity Press.
20. Womack, J.P., D.T. Jones, and D. Roos, The machine that changed the world: based on the Massachusetts Institute of Technology 5-million dollar 5-year study on the future of the automobile. 1990, New York: Rawson Associates. viii, 323 p.
Available online at www.sciencedirect.com
ScienceDirect
Procedia CIRP 93 (2020) 1085–1090
www.elsevier.com/locate/procedia
Abstract
The next generation of manufacturing systems calls for feasible solutions with high efficiency and flexibility. Thus, in recent years, Human-Robot Collaboration (HRC) research has attracted much attention worldwide, since it unites the repeatability and accuracy of robots with the adaptivity and intelligence of human operators. In this paper, the system architecture of an HRC solution is presented, together with a novel method of vision sensing and control towards a smart solution with high accuracy and fast speed. The proposed method is demonstrated
through implementations in the real robot cell, and evaluated by quantifiable measurements. Future research outlooks are addressed at the end
of the paper.
© 2020 The Authors. Published by Elsevier B.V.
This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/)
Peer-review under responsibility of the scientific committee of the 53rd CIRP Conference on Manufacturing Systems
Keywords: Human-Robot Collaboration; System Design; Smart Manufacturing; Cyber-Physical System; CPS
1. Introduction

Nowadays, different initiatives are proposed worldwide towards the next generation of the manufacturing industry. Despite the diverse terminologies and focuses, it is a consensus that humans should not be replaced by automation and computer systems. Instead, the new technologies shall work as assistance and support for the human to provide a more efficient, safe and friendly working environment. Among the latest manufacturing technologies, Human-Robot Collaboration (HRC) offers the opportunity of integrating the accuracy of the robot and the flexibility of the human [1]. Thus in this paper, the recent HRC research is reviewed and discussed. Then a novel HRC system is proposed towards higher accuracy and faster response speed. The proposed work is then validated through implementations and evaluated via quantifiable measurements.

2. Literature Review

In recent years, HRC research has been taken up globally as it offers a promising approach allowing operators to work with the industrial robots at the same time in the same space. The vision-based approach is popular for identifying the operation objects as a first step. A bin-picking system was proposed based on the point cloud generated by the RealSense camera [2]. The 3D objects are segmented from the point cloud to guide the industrial robot towards accurate object-picking tasks. Similarly, a 3D sensor is utilised to guide the robot grinding operation [3]. The depth sensor is utilised for localisation and the 2D camera for profile scanning. Eventually, the grinding process can be guided by the vision-based solution automatically.

To monitor the robotic environment, a framework was presented which realised separation distance monitoring between a robot and a human operator in a transparent and tuneable fashion [4]. The separation distance is assessed pairwise for key points on the robot and the human body and as such can be selectively modified to account for specific conditions. Aalerud et al. [5] proposed a system architecture for mapping and real-time monitoring of a relatively large industrial robotic environment. Multiple static sensor nodes are placed on the ceiling to generate the point clouds reflecting the dynamics in the robot cell.
On an even bigger scale, an autonomous scanning approach was presented which allowed multiple robots to perform collaborative scanning for dense 3D reconstruction of unknown indoor scenes [6]. The method can plan paths for several robots, allowing them to efficiently coordinate with each other such that the collective scanning coverage and reconstruction quality are maximised while the overall scanning effort is minimised.

After the robotic environment is recognised, the robot needs to be controlled according to the plan. Jeppesen et al. [7] developed a lightweight HRC structure based on the RealSense Robotic Development Kit. A field-programmable gate array is utilised for robot control and real-time tasks, e.g. following the human's hand. A similar approach is also introduced to drive a standard industrial robot, e.g. the ABB IRB140 [8]. The pre-defined robot model is then matched to each robot link, based on the depth image captured by the depth sensor. In this way, the robot posture can be monitored and estimated without any markers or sensors on the robot. Instead of controlling the robot locally, a 3D augmented reality navigation system using stereoscopic images was developed based on a remote robot operating system [9]. Accurate matching between the simulated model and the video image of the actual robot can be realised, which helps the operator to accomplish the remote control task correctly and reliably.

During robot manipulation, gesture recognition is an efficient method to communicate with the robot. Vysoky et al. [10] proposed a shared operator-robot workspace. The positions of hands and gestures are detected, which leads to instant reactions of the robot to the presence of the operator and eventually to control of the robot. An intuitive robot teaching method was developed based on hand-guided demonstration [11]. A depth camera is used to recognise various hand gestures which are used as the commands to control the robot gripper. Liu and Wang [12, 13] proposed an overall model of gesture recognition for human-robot collaboration. Four essential technical components in the model of gesture recognition are identified, i.e. sensor technologies, gesture identification, gesture tracking and gesture classification. Mazhar et al. [14] designed a framework for real-time safe human-robot collaboration, using static hand gestures and 3D skeleton extraction. The OpenPose library is integrated with the Microsoft Kinect V2 to obtain a 3D estimation of the human skeleton.

At the collaboration phase, a dynamic robot task selection framework was proposed in human-robot collaborative contexts through a voxel-based collision avoidance system [15]. In this research, the shared workspace is monitored by 3D point-cloud sensors. Wang et al. [16] introduced real-time active collision avoidance in an augmented environment. In this work, virtual 3D models of robots and real camera images of operators are integrated for monitoring and collision detection [17]. The industrial robot controllers are further linked for adaptive robot control, without the need for programming by the operators [18]. An HRC system for teaching assembly was developed by Haage et al. [19]. The human demonstrates the assembly process, and the RealSense RGBD sensor captures the key frames, which are converted into semantic graphs and later an assembly program to drive the industrial robot.

During the practice of the HRC system, safety is one of the most critical aspects. Maurtua et al. [20] measured the trust of workers in fenceless human-robot collaboration in industrial robotic applications, as well as gauging the acceptance of different interaction mechanisms between robots and human beings. In their findings, most of the participants declare that in the future the collaboration between robots and workers will be possible and that they will accept the collaborative tasks. Besides, gesture-based interactions and hand-guiding interaction mechanisms are also rated as promising approaches for the future.

3. Proposed System

Despite the significant development of vision-based HRC systems, there is still a lack of an efficient approach to perform accurate and fast interaction with the robot. Thus in this research, a novel system architecture is proposed, as illustrated in Fig. 1.

Fig. 1. The proposed HRC system architecture (Depth Sensor, Image Processing Module, Search Module, Collision Avoidance Module, Visualisation Module, ROS and Industrial Robot).
This is a resupply of March 2023 as the template used in the publication of the original article contained errors. The content of the article has remained unaffected.
Xi Vincent Wang et al. / Procedia CIRP 93 (2020) 1085–1090 1087
RGB camera and a depth camera. The RGB camera captures the different parts of the data sequentially until the visualisation
the colour video frames like normal video cameras, while the depth camera has a built-in scanner which senses the depth of the shooting scene. As a result, the video frames are sent to the Image Processing Module as the output.

3.1. Image Processing Module

The Image Processing Module is responsible for pre-processing the video frames from the cameras. A more detailed workflow of the Image Processing Module can be found in Fig. 2. To be more specific, the colour and depth video of the robot cell are first captured by the two cameras and divided into colour and depth frames that are transmitted to the Image Processing Module. In the Processing Module, the point clouds are extracted from the video frames and smoothed by the temporal filter. The background point cloud is processed to set the background of the experiment environment. Then the processed background point cloud and the other depth frames are delivered to the Search Module for further analysis. At the same time, an alignment method is developed to merge and align the colour and depth frames into one frame. Subsequently, the aligned frames are sent to the Visualisation Module to generate visible images.

After the depth frames reach the Search Module, the foreground point cloud is extracted by the foreground extraction algorithm and processed by the nearest neighbour search algorithms to compute the position information of the nearest point. The module then sends the result to the last software module, the Visualisation Module. After all the data is acquired, the Visualisation Module starts to render and visualise the results, and the workflow is completed.

From computing's perspective, the system consists of an independent initialisation process and three threads that realise video pre-processing, nearest point identification and data visualisation at the same time. The system starts when the background frames are captured by the depth sensor and transmitted to the processing module. Then the background point cloud is continuously extracted and processed with the temporal filter until the background is successfully set, which finishes the initialisation stage. While the background data is sent to the Search Module, the Image Processing Module keeps acquiring new video frames that contain the target object in the experiment scene. The video processing thread in the Image Processing Module fetches the frames and spatially aligns all the streams in the frames to the depth viewport to obtain the aligned frames, then smooths the depth frames by applying temporal filtering. The temporal filter is a kind of domain-transform filter that reduces depth noise by processing multiple frames with a one-dimensional exponential moving average.

3.2. Search Module

The Search Module is in charge of identifying the nearest points between the robot and the obstacle object. After the completion of the Image Processing Module, the filtered depth frames are delivered into the Search Module. A more detailed workflow of the Search Module is shown in Fig. 3. The closest-point thread controls how the nearest point is identified in the frame. First, the thread fetches the depth frames and extracts their point cloud. Then it differentiates the extracted point cloud from the background point cloud to acquire the foreground point cloud. It needs to be highlighted that, in the proposed work, different processing algorithms, i.e. K-D tree, traversal and enumeration algorithms, are deployed in the Search Module to identify the proper method for the manufacturing scenario.

Fig. 2. Workflow of the Image Processing Module.
Fig. 3. Workflow of the Search Module.
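As an illustration of the temporal filtering and background differencing described above, the sketch below shows the general idea in Python/NumPy. It is only a minimal sketch under our own assumptions; the function names, the smoothing factor and the tolerance are illustrative and are not taken from the reported implementation or from the RealSense SDK.

    import numpy as np

    def temporal_filter(prev_smoothed, new_depth, alpha=0.4):
        # One-dimensional exponential moving average over successive depth frames.
        # prev_smoothed and new_depth are 2-D arrays of depth values in metres.
        # alpha is an assumed smoothing factor; smaller values average over more frames.
        if prev_smoothed is None:                      # first frame initialises the filter
            return new_depth.astype(np.float32)
        return alpha * new_depth + (1.0 - alpha) * prev_smoothed

    def extract_foreground(depth, background, tol=0.02):
        # Keep only the pixels whose depth differs from the stored background
        # by more than an assumed tolerance (here 2 cm).
        mask = np.abs(depth - background) > tol
        rows, cols = np.nonzero(mask)
        return rows, cols, depth[mask]                 # foreground pixel coordinates and depths

In this sketch the background array plays the role of the background point cloud set during initialisation, and the returned foreground pixels would be converted to 3-D points before being passed to the Search Module.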
In the K-D Tree approach, the foreground point cloud is used to build the search tree. After the tree is fully generated, the position information of the robot arm is input into the tree. Through the query algorithm of the tree, the nearest point to the robot arm is found. The position information of the nearest point is then sent to the main thread in the Visualisation Module for video rendering. It needs to be noted that the proposed work is not designed to detect specific types of objects. Instead, the reported mechanism aims to detect any item that is close to the industrial robot, because in the industrial robot cell there is high uncertainty in the moving objects and what approaches the industrial robot is unpredictable. For instance, an operator might hold a tool or a box that could potentially collide with the robot. Hence, the proposed work presents a universal mechanism to detect any type of collision with the robot, to maximise the safety in the HRC cell.

3.3. Visualisation Module

The main task of the Visualisation Module is to visualise the processed data. Three main visualisation tasks are undertaken by the main thread, i.e. the aligned frames, the object and robot, and their position information. Several OpenGL libraries are used for rendering. The workflow of the visualisation is more straightforward (Fig. 4). The main thread firstly fetches the aligned frames from the Processing Module and the position information from the robot controller. Then the colourised depth image is rendered, followed by the colour frame. Eventually, the robot arm, the calculated closest point and the distance are rendered and displayed in the Visualisation Module as part of the graphical user interface. More detailed results can be found in the implementation section.

Fig. 4. Workflow of the Visualisation Module.

3.4. Collision Avoidance and Robot Control

As shown in Fig. 1, the closest distance between the human and the robot is sent to the Collision Avoidance Module as a dynamic vector. The collision detection function decides whether it is necessary to re-plan the robot trajectory. If so, a new trajectory plan is generated based on the latest collision state from the previous modules and sent to the virtual robot in the Robot Operating System (ROS).

In ROS, the actual industrial robot's states are first acquired. They are synchronised with the virtual robot, which reflects the physical environment, and sent to the Search Module to support the closest-distance computation mentioned before. Thus a closed-loop control mechanism is formed in the proposed system. The trajectory planning function receives the positions of the obstruction and the robot, and generates a new route when necessary. Eventually, the new route is sent to the virtual and industrial robots. The module also monitors the dynamics of the robot and the collision detection results continuously, to be prepared for the next re-planning.

4. Implementation

To validate and evaluate the proposed HRC structure, the HRC system is implemented in the real robot cell. An Intel RealSense D435 Depth Camera is utilised as the sensor to acquire depth and colour frames for further processing and recognition. The source code package used in this research is the Intel RealSense SDK 2.0, which includes integrated interfaces for the D435 and OpenGL interfaces for visualisation. The original Image Processing, Search, and Visualisation modules are developed based on the architecture presented above. ROS is utilised as the main robot control system. Moveit! and Octomap are the fundamental software packages utilised in ROS; Moveit! provides the low-level planning function at high speed, and Octomap is the visualisation environment, which also responds within short intervals. During the implementation, the Universal Robot 5 (UR5) is utilised as the physical robot and connected to ROS.

The image processing and collision detection results are shown in Fig. 5. Fig. 5a shows the original robotic environment frame, and Fig. 5b shows the result after the frame is filtered by the Image Processing Module. The missing points in the frame are recovered by the algorithm and highlighted in different colours, and the frame quality is higher. Fig. 5c presents the collision detection result without the Image Processing Module, and Fig. 5d shows the filtered version. The boundary of the human is clearer in the filtered result, and the system successfully and continuously detects the closest point between the human and the robot.

To evaluate the performance of the developed system, quantifiable measurements are also taken, focusing on the response speed. The detailed data can be found in Table 1. As for the outcome of the object recognition, the average initialisation time (including background setting) is 5.813 s. The average refresh rate of the result is 10.69 Hz when no nearest neighbour search algorithm is applied (enumeration). When the K-D Tree algorithm is applied, the performance (10.74 Hz) is similar to the enumeration (11.06 Hz) when the number of target points is below 100. As the number of target points increases, the refresh rate decreases to 2.46 Hz when around 1000 points are included, and becomes less than 0.1 Hz when the magnitude reaches ten thousand.
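The nearest-point computation that the Search Module performs can be sketched in two of the compared variants, brute-force enumeration and a K-D tree. The fragment below is a hedged illustration rather than the authors' code; it assumes SciPy's cKDTree and N x 3 NumPy arrays of x, y, z coordinates (robot points, e.g. sampled along the arm, versus the foreground point cloud). Timing the two functions on clouds of different sizes gives the kind of refresh-rate comparison reported later in Tables 1 and 2.

    import numpy as np
    from scipy.spatial import cKDTree

    def nearest_by_enumeration(robot_pts, cloud_pts):
        # Brute force: evaluate the distance of every (robot point, cloud point) pair.
        d = np.linalg.norm(robot_pts[:, None, :] - cloud_pts[None, :, :], axis=2)
        i, j = np.unravel_index(np.argmin(d), d.shape)
        return d[i, j], cloud_pts[j]                   # closest distance and closest cloud point

    def nearest_by_kdtree(robot_pts, cloud_pts):
        # K-D tree: build the tree once per frame, then query each robot point.
        tree = cKDTree(cloud_pts)
        dists, idx = tree.query(robot_pts)             # nearest cloud point for every robot point
        k = int(np.argmin(dists))
        return dists[k], cloud_pts[idx[k]]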
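The collision-avoidance behaviour described in Section 3.4 amounts to a simple closed-loop decision: read the latest human-robot distance, and stop and re-plan whenever it falls below a safety threshold. The sketch below is pseudocode-style Python under our own assumptions; read_closest_distance, stop_robot and replan_trajectory are hypothetical placeholders for the interfaces towards the Search Module and ROS, not functions from the paper, Moveit! or the RealSense SDK.

    import time

    SAFE_DISTANCE = 0.30          # assumed safety threshold in metres (illustrative value)

    def read_closest_distance():
        # Placeholder: would return the latest human-robot distance from the Search Module.
        return 1.0

    def stop_robot():
        # Placeholder: would command the virtual and physical robots to halt via ROS.
        print("robot stopped")

    def replan_trajectory(obstacle_distance):
        # Placeholder: would request a new route around the obstruction from the planner.
        print("re-planning, obstacle at %.2f m" % obstacle_distance)

    def collision_avoidance_loop(cycle_time=0.1):
        # Compare the closest distance with the threshold and re-plan whenever it is violated.
        while True:
            d = read_closest_distance()
            if d < SAFE_DISTANCE:                      # collision risk: halt first, then re-route
                stop_robot()
                replan_trajectory(d)
            time.sleep(cycle_time)                     # roughly the refresh rate of the Search Module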
Fig. 5. (a) Environment image without the Image Processing Module, (b) environment image with the Image Processing Module, (c) collision detection without the Image Processing Module, and (d) collision detection with the Image Processing Module.
The results show that the K-D Tree algorithm has better performance when the ratio of the data scale to the data dimension is low, but when the scale is much larger than the dimension, the calculation speed is highly affected.

Table 1. System performance evaluation.
System                                 Response time
System initiation                      5.813 s
Based on the traversal algorithm       10.69 Hz
Based on the K-D Tree algorithm        10.74 Hz
Based on the enumeration algorithm     11.06 Hz

Another crucial point is the performance of temporal filtering. The average refresh rate with temporal filtering, using enumeration, is 10.72 Hz when the target points number around 10000, with an instability value of 15, while without the filter the refresh rate is 12.93 Hz and the instability value is 28. The instability value is the number of times the recognition result is significantly affected by noise within 60 seconds.

Regarding the system accuracy, the different algorithms are also compared in terms of refresh speed and distance error, i.e. the difference between the actual distance between the human and the robot and the one detected by the system (Table 2). The enumeration method provides an acceptable response speed and distance error, i.e. 11 cm. The K-D tree method is not capable of handling a large number of points at a reasonable speed. The performance of the Octree method relies heavily on the number of layers constructed in the method. The more layers are utilised, the higher the detection accuracy, while the response speed decreases at the same time due to the longer computation time. The 6-layer tree structure provides the best accuracy during the experiment, i.e. 9 cm. When the number of layers is increased to 7, the response speed is significantly reduced and drops to 3.84 Hz with a similar level of detection accuracy (12 cm).

Table 2. Performance evaluation on accuracy.
Search algorithm    Refresh rate (Hz)   Average distance error (m)
Enumeration         10.88               0.11
K-D Tree            0.08                -
Octree, h = 1       11.49               0.54
Octree, h = 2       11.15               0.32
Octree, h = 3       11.26               0.19
Octree, h = 4       11.2                0.12
Octree, h = 5       11.09               0.14
Octree, h = 6       9.61                0.09
Octree, h = 7       3.84                0.12

5. Discussions and conclusions

The modern manufacturing system calls for a new generation of solutions that are smart, flexible and efficient, while the human still needs to be at the centre of the system. The working environment needs to be friendly and safe for human operators. HRC offers the opportunity to integrate the flexibility of humans with the accuracy of industrial robots. In this paper, a novel HRC system architecture is proposed. The scientific contribution includes the closed-loop system structure towards accurate HRC processes. Moreover, multiple collision detection algorithms are deployed to identify the suitable solution for specific HRC tasks. The proposed system is validated through implementation in a real robot cell, and the performance is evaluated based on quantifiable assessments. In the future, the proposed work can be analysed via industrial tasks with more complex objects and trajectories. Additionally, safety and ergonomic metrics can also be evaluated to improve the user-friendliness of the proposed work.
INTERNATIONAL JOURNAL OF SCIENTIFIC & TECHNOLOGY RESEARCH VOLUME 8, ISSUE 12, DECEMBER 2019 ISSN 2277-8616
Abstract: Automation plays an important role in industrial and domestic applications. The purpose of automation is to achieve higher production rates, efficient utilization of materials, better quality, reduced processing time and improved safety. Nowadays, automation is applied not only to industrial problems but also to domestic applications. This paper describes the application of automation to one of the kitchen tasks, onion peeling and cutting. Onion peeling and cutting is a fatiguing and time-consuming job in food preparation and an everyday activity in every food processing system. In hotels, hostel messes and restaurants, considerable manpower is required for peeling and cutting onions, and the task consumes a lot of time; on the other hand, there is a shortage of workers for this type of work. Hence, there is a good opportunity to implement automation in an onion peeling and cutting machine. The objective of this project is to apply a mechatronics approach to automate the onion peeling and cutting operation.

Index terms: Automation, Onion peeling, Circular drum, AC motor, Punched metal sheet, Cutter, Operating time
The peeled skin is separated by the water and fed to the side gap of the circular disc. At the bottom of the drum there is a hole connected to a pipe, through which the peeled skin and the water come out and are collected for use as fertilizer. Even without using water, the onion skin can be removed by a blower setup that separates the skin from the onions.

7.4 Onion Cutting
The peeled onion is passed to the hopper along the pathway. When the onion falls into the hopper, the hopper guides it onto the cutter. The cutter is driven by a belt from the pulley of the same motor used for the peeling operation and rotates at around 800 revolutions per minute; the blades slice the onion as it falls onto the cutter, and the slices are collected. Blades of various sizes are used to cut the onion into different sizes for various food preparations. This mechanism slices the onion rather than chopping it.
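Since the cutter is driven from the peeling motor through a belt and pulleys, its speed follows the usual belt-drive ratio, cutter speed = motor speed x (driver pulley diameter / driven pulley diameter). The short check below uses assumed pulley diameters and motor speed purely for illustration; the paper itself only states that the cutter rotates at around 800 revolutions per minute.

    def cutter_speed(motor_rpm, motor_pulley_dia, cutter_pulley_dia):
        # Belt-drive speed ratio: driven speed = driver speed * (driver dia / driven dia).
        return motor_rpm * motor_pulley_dia / cutter_pulley_dia

    # Assumed example values (not from the paper): a 1440 rpm motor with a 50 mm pulley
    # driving a 90 mm pulley on the cutter shaft gives the stated ~800 rpm.
    print(round(cutter_speed(1440, 50, 90)))           # -> 800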
8 CONCLUSIONS
The prototype of the onion peeling and cutting machine was designed (Figure 2) and fabricated (Figure 3) to reduce manpower and time. A motor with a torque capacity of 5 Nm is sufficient to drive the circular disc and the cutter for peeling and cutting, respectively. The designed machine peels 2 kg of onions in 3 minutes, and the combined peeling and cutting process takes 8 minutes. The model peeled the onions successfully, while the cutting stage reached 75% efficiency, which can be improved by changing the cutter blade to obtain the desired output. The peeled skin is stored separately and can be used as agricultural fertilizer. This technique of onion peeling and cutting effectively reduces the onion preparation time during cooking. Overall, the proposed model satisfied the intended advantages.
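A quick back-of-the-envelope check of the stated figures gives the machine's throughput; the snippet below only restates the numbers from the conclusion (2 kg of onions peeled in 3 minutes, peeled and cut in 8 minutes).

    batch_kg = 2.0           # onions processed per batch (stated in the conclusion)
    peel_min = 3.0           # peeling time per batch, minutes
    peel_and_cut_min = 8.0   # combined peeling and cutting time per batch, minutes

    print(batch_kg * 60 / peel_min)           # peeling throughput: 40.0 kg/h
    print(batch_kg * 60 / peel_and_cut_min)   # peeling plus cutting throughput: 15.0 kg/h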
Special Issue - 2020 International Journal of Engineering Research & Technology (IJERT)
ISSN: 2278-0181
NCETESFT - 2020 Conference Proceedings
Abstract - With the advancement of technology, robots have received growing attention from researchers seeking to make everyday life easier. This project presents the design and development of a floor cleaning robot using IEEE standards: a Bluetooth-controlled mobile robot that scans the obstacles ahead of it and avoids collisions while in motion. A Raspberry Pi 3 is the main component used to control the cleaning robot. An ultrasonic sensor transmits ultrasonic waves from its sensor head, receives the echo waves and sends its output to the Raspberry Pi 3. The ultrasonic sensor is mounted on a servomotor, which rotates the sensor, and it measures the distance between the robot and the obstacle ahead of it. When an obstacle is detected, the Pi 3 stops the robot immediately and the buzzer is actuated. The mopping operation can be started or stopped at any time as required. The mopping brush is actuated by a DC motor fixed to it, and the signal to this motor is fed by the controller. An LCD displays each operation performed by the robot, and the buzzer is an audio signalling device that indicates the operating status of the robot.

Keywords - Raspberry Pi 3; Sensors; Buzzer; LCD; DC Motor.

I. INTRODUCTION

These days humans lead busy lives. People in the cities do not have regular schedules and work long hours, so they look for time-saving methods, and robots have therefore taken over much of the manual work. For career-oriented working women it becomes hectic to handle home and office together. Traditionally the floor is cleaned by hand with a dry or wet mop, which requires hard scrubbing of the surface. The cleaning covers varied surfaces, basically cement floors and highly polished wooden or marble floors. Among these, rough surfaces such as cement floors, mostly found in semi-urban areas, are covered with so much dust that they need more time for cleaning. To save this time, a house cleaning robot is needed: an automatic system that works and cleans on its own without human control or intervention. An autonomous floor cleaning robot saves considerable time in daily life. It performs sweeping and mopping tasks at the same time, detects obstacles and also sprays water automatically. Service robots have become popular recently; these robots operate semi- or fully automated to perform services helpful to the well-being of humans and equipment. Robots of many types, including medical robots, underwater robots, surveillance robots and demolition robots, do multiple jobs. They can clean floors, mow lawns and guard homes, and can also assist old and disabled people, perform some surgeries, check pipes and sites that are highly dangerous to people, fight fires and defuse bombs.

II. LITERATURE SURVEY

The floor cleaning robot is a trending concept these days. By reviewing different papers and the techniques used in several cleaning robots, we started working on our design of a floor cleaning robot based on the Raspberry Pi 3 model. The papers surveyed for the literature review are as follows:

Aishwarya Pardeshi et al. [1] present the design, development and fabrication of a programmed cleaner robot. This type of robot performs automated functions with extra features such as a pick-and-place mechanism and a dirt container with an air vacuum mechanism. This kind of work is simple and helpful in the betterment of human life.

Ajith Thomas et al. [2] proposed an autonomous robot for floor cleaning. It is able to perform suction and cleaning, obstacle detection and water spraying. Furthermore, it can also work in a manual mode. All hardware and software functions are handled by the Raspberry Pi 3 model.

Vaibhavi Rewatkar and Sachin T. Bagde [3] provided a comprehensive overview of the technological advances that help, in the real world, the convenience of almost all people who are extremely busy. Consequently, this led to the goal of constructing an automatic home appliance. The reviewed automated cleaner has components such as DC-motor-operated wheels, a dustbin, a cleaning brush, a mop and an obstacle-avoiding sensor. A 12 V battery is employed for supplying power, and a special technique of ultraviolet germicidal cleaning is used. The study was carried out keeping in mind the economical expense of the product.
Vinod J Thomas et al. [4] designed a cleaner robot for domestic application. The robot contains a cleaning module which can be used for cleaning. The robot was designed to be capable of reaching almost every space and corner of any room, so it has to be as compact as possible. The robot is operated from an Android phone using wireless Bluetooth technology and was built with an Arduino microcontroller at its core. The microcontroller is complemented with communication modules such as wireless Bluetooth, motors and a dust suction system to work accordingly.

Manya Jain et al. [5] discussed the development of an automatic floor cleaner. The project can be used for domestic and professional purposes to clean the surface automatically and manually. When it is turned ON, it sucks in the dust particles while moving over the surface (floor or any other area). The driver control mechanism is used to drive the motors so that the robot is able to manoeuvre, and a few sensors are used to detect and avoid obstacles. This is useful in making the way of life better for humankind.

Abhishek Pandey et al. [6] reviewed the requirement for a home cleaning robot. To save time, there is a need for a programmed system that cleans on its own without human intervention. They also considered how to help people who have physical disabilities. To achieve this, they needed a cleaning system that works in accordance with what we say, thus supporting a physically challenged person.

Karthick T. et al. [7] aimed to build an autonomous robot that can move by itself without constant human instruction. The autonomous cleaner robot uses low-power electric components and can operate at very low power. The electric parts are the ATmega 2560 controller board, ultrasonic detectors, a transformer IC and a motor driver circuit. The mechanical part is a motor unit with a gearbox. The ultrasonic detectors identify obstructions according to the program being executed. A 12 V, 4.5 Ah rechargeable lead-acid battery is the energy source for this proposed cleaning robot.

Manreet Kaur and Preeti Abrol [8] presented the working of a floor cleaning robot. This robot can work in either of two modes, and all hardware and software functions are handled by an AT89S52 microcontroller. The robot is able to perform sweeping and mopping jobs. RF modules are used for cordless communication between the remote (manual mode) and the robot, with a range of 50 m. The robot is provided with an IR sensor for obstacle recognition and an automated water sprayer pump. Four motors are employed: two for cleaning, one for the pump and one for the wheels, and a dual relay circuit is used to operate the pump motor and the cleaner motor. In previous works there was no automated water sprayer and the robots worked only in programmed mode. In the automatic mode, the robot controls all the functions itself, changes its path when an obstacle is detected and moves back again. In the manual mode, the keypad is used to execute the expected job and operate the robot; the RF component transfers information between the remote and the robot and displays the data associated with obstacle detection on the LCD. The entire circuit is powered by a 12 V battery pack.

Zelun L and Zhicheng Huang [9] designed a cleaning robot based on ultrasonic principles. With the single-chip AT89C52 microcomputer and ultrasonic detectors, the robot achieves practical obstacle avoidance, programmed control and programmed sweeping. In this cleaning robot, a revolving cylindrical brush in front of the robot sweeps garbage into the dustbin along the direction of motion, and a mop behind the robot wipes the floor while the robot is operating.

Rupinder Kaur [10] designed a swabbing robot which is very good for cleaning jobs, especially in homes, office buildings and industries where sanitation is a significant matter. Many research organizations are actively seeking the best results through artificial intelligence, a branch of technology that makes computers behave like a mind. This product sweeps and mops the floor area with cleaning and wiping components, and it also collects dust particles and other small parts. Mapping is used to instruct this small device. The device is very simple to use, very affordable and cleans every nook of the region. Being autonomous, it can work in one's absence.

S Monika, K Aruna Manjusha et al. [11] present that floor cleaning can be done in an easier and more efficient way by a robot utilizing a wireless system. The proposed robot saves time and labour cost. Previous works such as the robot household appliance and the automatic floor cleaner robot had some drawbacks: they collided with objects in front of them, the vacuum could not reach small areas and left those areas unclean, and the automatic floor cleaner collected the dirt but did not clean wet floors. A few of these drawbacks are overcome in this project.

Amit Sharma, Akash Choudhary et al. [12] aim to build a fully automated hybrid home cleaning robot that can perform tasks such as mopping and cleaning of the floor. After testing, they found that it can perform all tasks well without any hurdle. The robot was tested on various parameters such as path following, obstacle avoidance, navigation, mopping and the vacuum mechanism.

III. LITERATURE REVIEW AT A GLANCE

Sr. No.   Title of paper                                         Further extension                               Major contribution
I         "Automatic Floor Cleaner"                              Pick and place mechanism                        Brings flexibility to do work
II        "An Advanced Mobile Robot for Floor Cleaning"          Environmentally friendly                        Less time consuming
III       "Floor Cleaning Robot"                                 Auto disposal mechanism                         Helps physically disabled people
IV        "Automatic Floor Cleaner"                              Works automatically                             Able to cover large floor areas
V         "A Technological Survey on Autonomous Home             Dealing with small pieces of garbage such as    Saves time, helps physically disabled people
          Cleaning Robots"                                       paper chips, paper and soil blocks
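Several of the surveyed robots, as well as the robot described in the abstract above, share the same obstacle-handling pattern: an ultrasonic sensor measures the distance ahead, and the controller stops the drive motors and sounds a buzzer when an obstacle is too close. The sketch below shows that pattern on a Raspberry Pi with an HC-SR04-style sensor; the GPIO pin numbers and the distance threshold are our assumptions for illustration, not the wiring used in any of the reviewed papers.

    import time
    import RPi.GPIO as GPIO

    TRIG, ECHO, MOTOR_EN, BUZZER = 23, 24, 17, 27   # assumed BCM pin numbers
    STOP_DISTANCE_CM = 15                            # assumed safety threshold

    GPIO.setmode(GPIO.BCM)
    GPIO.setup(TRIG, GPIO.OUT)
    GPIO.setup(ECHO, GPIO.IN)
    GPIO.setup(MOTOR_EN, GPIO.OUT)
    GPIO.setup(BUZZER, GPIO.OUT)

    def distance_cm():
        # Trigger the ultrasonic sensor and convert the echo time to centimetres.
        GPIO.output(TRIG, True)
        time.sleep(0.00001)               # 10 microsecond trigger pulse
        GPIO.output(TRIG, False)
        start = end = time.time()
        while GPIO.input(ECHO) == 0:      # wait for the echo pulse to start
            start = time.time()
        while GPIO.input(ECHO) == 1:      # wait for the echo pulse to end
            end = time.time()
        return (end - start) * 34300 / 2  # speed of sound ~343 m/s, there and back

    try:
        while True:
            if distance_cm() < STOP_DISTANCE_CM:
                GPIO.output(MOTOR_EN, False)   # stop the drive motors
                GPIO.output(BUZZER, True)      # indicate the obstacle
            else:
                GPIO.output(MOTOR_EN, True)
                GPIO.output(BUZZER, False)
            time.sleep(0.1)
    finally:
        GPIO.cleanup()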
ACKNOWLEDGEMENT
We gratefully acknowledge the help and co-operation offered by Dr. Parameshachari BD, Professor and Head, Dept. of Telecommunication Engineering, our project guide, and the Management of GSSSIETW, Mysore, for providing the support needed to carry out this project.
REFERENCES
[1] P. Aishwarya, S. More, D. Kadam, V.A. Patil, “Automatic Floor
Cleaner”, IJECT vol. 8, 2017.
[2] T. Ajith, M. S. Rohith, J. Febin, J. Cheriyan, R. Mary George, "An
Advanced Mobile Robot for Floor Cleaning”, International Journal of
Advanced Research in Electrical, Electronics and Instrumentation
Engineering, vol. 5, no. 3, 2016.
[3] R. Vaibhavi and S. T. Bagde, “A Review on Design of Automated Floor
Cleaning System”, International Journal on Recent and Innovation Trends
in Computing and Communication, vol. 3, no. 2.
[4] V. J Thomas, B. Xaviour, J. K George, “Cleaner Robot”, International
Journal of Emerging Technology and Advanced Engineering, ISSN 2250-
2459, ISO 9001:2008 Certified Journal, vol. 5, no. 12, 2015.
[5] M. Jain, P. S. Rawat, J. Morbale, “Automatic Floor Cleaner”,
International Research Journal of Engineering and Technology (IRJET),
vol. 4, no. 4 , 2017.
[6] A. Pandey, A. Kaushik, A. K. Jha, G. Kapse, “A Technological Survey on
Autonomous Home Cleaning Robots”, International Journal of Scientific
and Research Publications, vol. 4, no. 4, 2014.
[7] T. Karthick, A. Ravikumar, L. Selvakumar, T. Viknesh, B. Parthiban. and
A. Gopinath, “Simple Autonomous cleaner Robot”, International Journal
of Science, Engineering and Technology Research (IJSETR), vol. 5, no.
3, 2016.
[8] R. Vaibhavi and S. T. Bagde, “A Review on Design of Automated Floor
Cleaning System”, International Journal on Recent and Innovation Trends
in Computing and Communication, vol. 3 no. 2.
[9] J. T. Vinod, B. Xaviour, J. K. George, “Cleaner Robot”, International
Journal of Emerging Technology and Advanced Engineering, vol. 5, no.
12, 2015.
[10] M. Jain, P. S. Rawat, J. Morbale, “Automatic Floor Cleaner”,
International Research Journal of Engineering and Technology (IRJET),
vol. 04, no. 4 , 2017.
[11] A. Pandey, A. Kaushik, A. K. Jha, G. Kapse, "A Technological Survey on
Autonomous Home Cleaning Robots”, International Journal of Scientific
and Research Publications, vol. 4, no. 4, 2014.
[12] T. Karthick, A. Ravikumar, L. Selvakumar, T. Viknesh, B Parthiban. and
A. Gopinath, “Simple Autonomous cleaner Robot”, International Journal
of Science, Engineering and Technology Research (IJSETR), vol. 4, no.
5, 2015.
[13] M. Kaur, P. Abrol, “Design and Development of Floor Cleaner Robot
(Automatic and Manual)”, International Journal of Computer
Applications (0975 – 8887), vol. 97, no. 19, 2014.
[14] Zelun L, Zhicheng Huang, "Design of a type of cleaning robot with
ultrasonic”, Journal of Theoretical and Applied Information Technology
31st January 2013. Vol. 47 No.3, ISSN: 1992-8645
[15] K. Rupinder, “An extremely cost efficient Swabbing Robot”,
International Journal of Engineering and Computer Science ISSN: 2319-
7242, vol. 6, no. 1, 2017.
[16] S Monika, K Aruna Manjusha, S V S Prasad, B.Naresh “Design and
Implementation of Smart Floor Cleaning Robot using Android App”
International Journal of Innovative Technology and Exploring
Engineering (IJITEE) ISSN: 2278-3075, Volume-8 Issue-4S2 March,
2019.
[17] S. Amit, A. Choudhary, A. Gaur, and A. Rajpurohit, “Fully automated
hybrid home cleaning.” International Journal of Engineering
Technologies and Management Research, vol. 5, no. 3, pp. 219-225, 2018.
[18] N. Fathima, A. Ahammed, R. Banu, B.D. Parameshachari, and N.M Naik,
“Optimized neighbor discovery in Internet of Things (IoT),” In Proc. of
International Conference on Electrical, Electronics, Communication,
Computer, and Optimization Techniques (ICEECCOT), pp. 1-5, 2017.
RESEARCH PAPERS
Automatic Drilling Machine Based on PLC