Vision For Code
Designs on paper
Doddi Ganesh, Surisetti Chandra Sekhar, Rongali Shushma,
Hanumalasetty Hemanth Kumar
ABSTRACT:
The VisionforCode project represents an innovative solution that bridges the gap between design and implementation in web and app development. Traditional web and app development processes often involve a significant amount of time and effort to translate design concepts into code. VisionforCode aims to streamline this process, making it faster and more accessible to designers and developers alike.
The project employs advanced image-recognition techniques to interpret hand-drawn sketches and wireframes, extracting key design elements such as buttons, text fields, and images. The resulting code is responsive, adaptable to different screen sizes, and well structured, meeting modern web and app development standards.
VisionforCode holds the potential to revolutionize the development workflow by reducing the need for manual coding and bridging the communication gap between designers and developers. Both benefit from cleaner, more consistent code, which reduces the chance of errors and streamlines the development process.
INTRODUCTION:
The VisionforCode project represents an innovative solution that bridges the gap between design and implementation in web and app development. Traditional web and app development processes often involve a significant amount of time and effort to translate design concepts into code. VisionforCode aims to streamline this process, making it faster and more accessible to designers and developers alike.
LITERATURE REVIEW:
From the COCO dataset of camera pictures, many object-detection algorithms are freely available. We use model tuning to decide which algorithm best suits our dataset.
PROPOSED SYSTEM:
Model Tuning: In the context of machine learning, "model tuning" refers to the process of fine-tuning or adjusting a trained machine-learning model to improve its performance on a specific task or dataset. This typically involves hyperparameter tuning and optimization.
We train with additional algorithms for better output accuracy, because we add additional dataset images. The best-performing model is selected.
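The selection step above can be sketched as a small grid search. The candidate names, learning rates, and scoring function below are hypothetical placeholders for the real training-and-evaluation loop (which would score each detector, e.g. by mAP on a held-out set):

```python
from itertools import product

# Hypothetical held-out validation scores; in the real workflow each
# candidate detector would be trained and evaluated on the dataset.
BASE_SCORE = {"EfficientDet": 0.71, "RetinaNet": 0.68,
              "CenterNet": 0.65, "Mask R-CNN": 0.70}

def evaluate(model_name, learning_rate):
    # Placeholder scoring: pretend a smaller learning rate helps a bit.
    bonus = 0.02 if learning_rate <= 1e-4 else 0.0
    return BASE_SCORE[model_name] + bonus

# Try every (model, learning-rate) combination and keep the best one.
candidates = product(BASE_SCORE, [1e-3, 1e-4])
best_model, best_lr = max(candidates, key=lambda c: evaluate(*c))
```

In practice each call to `evaluate` would be a full training run, so the grid is kept small and the winning configuration is what gets deployed.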
Different algorithms such as EfficientDet, RetinaNet, CenterNet, and Mask R-CNN are available to use.
Google Colab is used for training, since it is a well-suited environment for deep-learning models.
We use 14 different types of annotations for UI design, covering most of the basic annotation types. Complex annotations were not made available, but corresponding results are given to the end user based on the nearest matching type.
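The nearest-match fallback can be sketched as follows. The class names and HTML snippets are illustrative assumptions (the project defines 14 classes), and `difflib` stands in for whatever similarity measure is actually used:

```python
import difflib

# Illustrative subset of the UI annotation classes and the HTML each
# maps to; the real project defines 14 such classes.
HTML_FOR = {
    "button":    "<button>Button</button>",
    "textfield": '<input type="text">',
    "image":     '<img src="placeholder.png" alt="">',
    "checkbox":  '<input type="checkbox">',
}

def html_for_label(label):
    """Return the HTML for a detected label, falling back to the
    nearest-matching known class when the label is unfamiliar."""
    if label in HTML_FOR:
        return HTML_FOR[label]
    nearest = difflib.get_close_matches(label, HTML_FOR, n=1, cutoff=0.0)
    return HTML_FOR[nearest[0]]
```

An unseen label such as `"text_field"` would thus still produce the text-field snippet rather than failing.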
A website is designed to which we can attach our hand-drawn sketches to get the corresponding HTML code for those designs.
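The sketch-to-HTML step behind that website can be sketched as below. The detection format (label plus x, y, w, h box) and the element snippets are assumptions for illustration, not the project's actual schema:

```python
# Turn detected sketch elements into a simple HTML page, laying the
# elements out top-to-bottom in the order they appear in the sketch.
SNIPPET = {"button": "<button>Button</button>",
           "textfield": '<input type="text">',
           "image": '<img src="placeholder.png" alt="">'}

def detections_to_html(detections):
    """detections: list of (label, (x, y, w, h)) from the detector."""
    ordered = sorted(detections, key=lambda d: d[1][1])  # sort by y
    body = "\n".join(f"  {SNIPPET[label]}" for label, _ in ordered)
    return f"<html>\n<body>\n{body}\n</body>\n</html>"

page = detections_to_html([("textfield", (10, 80, 200, 30)),
                           ("button", (10, 130, 80, 30)),
                           ("image", (10, 10, 100, 100))])
```

Sorting by the vertical box coordinate preserves the visual order of the sketch in the generated page.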
EXISTING SYSTEMS:
A CNN was used in the pix2code project. First, the GUI image and the identities, positions, and poses of its elements are taken. Second, the diagrams are identified. Finally, solutions are provided by the latent variables inferred from scene understanding to generate the corresponding diagrams, i.e., the computer code for the objects represented by these variables.
Deep-learning methods require large datasets with a huge number of samples; here the dataset contains training sketches. A Multilayer Perceptron Network (MLP) is used, since it can distinguish data that is not linearly separable. During training, input-output data pairs are fed into the MLP, and the algorithm updates the weights between neurons to model the transformation that maps the input data to the output.
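As a toy illustration of that weight-update process, the following NumPy sketch trains a small MLP on XOR, the textbook example of data that is not linearly separable. The architecture, learning rate, and iteration count are illustrative choices, not the project's actual settings:

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

# One hidden layer of 8 sigmoid units, one sigmoid output unit.
W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1 / (1 + np.exp(-z))

losses = []
for _ in range(5000):
    h = sigmoid(X @ W1 + b1)                 # forward pass
    out = sigmoid(h @ W2 + b2)
    losses.append(float(((out - y) ** 2).mean()))
    d_out = (out - y) * out * (1 - out)      # backprop through output
    d_h = (d_out @ W2.T) * h * (1 - h)       # backprop through hidden
    W2 -= h.T @ d_out; b2 -= d_out.sum(0)    # weight updates (lr = 1)
    W1 -= X.T @ d_h;   b1 -= d_h.sum(0)

pred = (sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2) > 0.5).astype(int)
```

The loss falls as the weights between neurons are adjusted, and the trained network separates the XOR classes that no single linear boundary can.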
In UI code generation using deep learning, the software environment is the Windows 10 operating system along with Python. Keras is a Python neural-network library that is required to build models in TensorFlow 3.3.0.10. A better cloud environment is required, so Google Colab is used.
CONCLUSION:
In this paper, we presented VisionforCode, an easy and efficient method to generate computer code given a single GUI image as input. Our work demonstrates a system that automates the process of implementing GUIs using common design symbols and trained modules. Implementing a now-standard attention mechanism could further improve the quality of the generated code. Finally, we designed a technique to evaluate how well the new system translates sketches into code.