
VisionforCode: Generating HTML Code from Sketch Designs on Paper
Doddi Ganesh, Surisetti Chandra Sekhar, Rongali Shushma,
Hanumalasetty Hemanth Kumar

ABSTRACT:

The VisionforCode project represents an innovative solution that bridges the design gap
in web and app development. Traditional web and app development processes often
involve a significant amount of time and effort to translate design concepts into code.
VisionforCode aims to streamline this process, making it faster and more accessible to
designers and developers alike.
The project employs advanced image recognition techniques to interpret hand-drawn
sketches and wireframes, extracting key design elements such as buttons, text fields,
and images. The resulting code is responsive, adaptable to different screen sizes, and
well-structured, meeting modern web and app development standards.
VisionforCode holds the potential to revolutionize the development workflow by
reducing the need for manual coding and bridging the communication gap between
designers and developers, both of whom benefit from cleaner, more consistent code
that reduces the chance of errors and streamlines the development process.

INTRODUCTION:

The VisionforCode project represents an innovative solution that bridges the design gap
in web and app development. Traditional web and app development processes often involve
a significant amount of time and effort to translate design concepts into code. VisionforCode
aims to streamline this process, making it faster and more accessible to designers and
developers alike.

The project employs advanced image recognition techniques to interpret hand-drawn
sketches and wireframes, extracting key design elements such as buttons, text fields, and
images. The resulting code is responsive, adaptable to different screen sizes, and
well-structured, meeting modern web and app development standards.

VisionforCode holds the potential to revolutionize the development workflow by reducing
the need for manual coding and bridging the communication gap between designers and
developers. It empowers designers to quickly visualize their ideas, which can be converted
into working prototypes with minimal coding effort. Developers benefit from cleaner, more
consistent code, reducing the chances of errors and streamlining the development process.

This paper introduces VisionforCode as a game-changing project with the potential to
reshape the way we create web and mobile applications, fostering collaboration between
design and development teams and accelerating the delivery of innovative digital products.

LITERATURE REVIEW:

Front-end engineers and designers frequently experience frustration when translating
designs into working web pages, so a more effective approach to web page design is needed.

Previous projects have made it easier to recognise the elements drawn in mock-up
images and encode them into the corresponding web page. Deep neural network models,
including CNNs, are used to train on these datasets.

pix2code is an approach based on Convolutional and Recurrent Neural Networks that
generates computer code from a single GUI screenshot as input; however, its model was
trained separately on a small dataset, so its accuracy is limited. Sketch2Code, in
contrast, combines classical wireframe techniques with deep learning; code is generated
after pre-processing, and it achieves somewhat better results even though its input is
based on camera pictures.

Many algorithms pre-trained on the COCO dataset are freely available. We use model
tuning to decide which algorithm best suits our dataset.

PROPOSED SYSTEM:

Model Tuning: In the context of machine learning, "model tuning" refers to the process of
fine-tuning or adjusting a trained machine learning model to improve its performance on a
specific task or dataset. This typically involves hyperparameter tuning and optimization.
Because we add additional dataset images, we train with several algorithms to get better
accuracy in the output, and the best one is selected. Different algorithms are available
to use, such as EfficientDet, RetinaNet, CenterNet, and Mask R-CNN.
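The selection step described above can be sketched as follows. This is a minimal illustration of the idea, not the project's actual evaluation code: each candidate detector would be trained and scored on a held-out validation split, and the placeholder mAP scores below are assumed values standing in for real training runs.

```python
# Sketch of the model-selection step: evaluate each candidate detector
# on a held-out validation split and keep the best one.
# The scores below are placeholders, not measured results.

def select_best_detector(candidates, evaluate):
    """Return the (name, score) pair with the highest validation score."""
    scores = {name: evaluate(name) for name in candidates}
    best = max(scores, key=scores.get)
    return best, scores[best]

# Hypothetical validation mAP scores standing in for real training runs.
_FAKE_SCORES = {
    "EfficientDet": 0.61,
    "RetinaNet":    0.58,
    "CenterNet":    0.67,
    "Mask R-CNN":   0.64,
}

best_name, best_score = select_best_detector(
    _FAKE_SCORES.keys(), lambda name: _FAKE_SCORES[name]
)
print(best_name, best_score)
```

In a real run, `evaluate` would train the named detector on the annotated sketches and return its validation accuracy.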

Google Colab is used for training since it is a well-suited environment for deep learning models.

We use 14 different types of annotations for UI design elements, covering most of the basic
annotation types. Complex annotations are not yet supported, but the end user is given the
result corresponding to the nearest matching type.
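The mapping from recognised annotations to HTML can be sketched as below. The class names, markup snippets, and the generic fallback are illustrative assumptions, not the project's actual 14-type mapping table:

```python
# Illustrative mapping from detected annotation classes to HTML snippets;
# the class names and markup here are assumptions, not the project's
# actual mapping table.

ANNOTATION_TO_HTML = {
    "button":   '<button type="button">Button</button>',
    "textbox":  '<input type="text" placeholder="Text field">',
    "checkbox": '<input type="checkbox">',
    "image":    '<img src="placeholder.png" alt="image">',
}

def detections_to_html(detections):
    """Render detections top-to-bottom by their y coordinate.

    Each detection is a (class_name, y) pair; classes without a known
    mapping fall back to a generic container element.
    """
    parts = []
    for cls, _y in sorted(detections, key=lambda d: d[1]):
        parts.append(ANNOTATION_TO_HTML.get(cls, "<div></div>"))
    return "\n".join(parts)

html = detections_to_html([("textbox", 40), ("button", 120), ("checkbox", 10)])
print(html)
```

Sorting by the y coordinate preserves the vertical order of the sketched elements in the generated page.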

A website is designed to which we can upload our hand-drawn sketches and receive the
corresponding HTML code for those designs.

EXISTING SYSTEMS:
CNN was used in the pix2code project. First, the GUI image and the identities, positions,
and poses of its elements are taken. Second, the diagrams are identified. Finally, the latent
variables inferred from scene understanding are used to generate the corresponding output,
i.e., computer code for the objects represented by these variables.
Deep learning methods require large datasets with a huge number of samples; here the
dataset contains training sketches. A Multilayer Perceptron Network (MLP) is used because
it can distinguish data that is not linearly separable. During training, input/output data
pairs are fed into the MLP and the algorithm updates the weights between neurons to model
the transformation from input data to output.
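As a minimal illustration of why an MLP can handle non-linearly-separable data, the sketch below hard-codes a tiny two-layer perceptron that computes XOR, the classic function no single-layer perceptron can learn. In the actual system these weights would be learned from input/output pairs rather than written by hand:

```python
# Tiny two-layer perceptron with hand-picked weights computing XOR,
# a non-linearly-separable function.  In practice the weights between
# neurons are learned from input/output pairs, not set manually.

def step(x):
    """Threshold activation: fire if the weighted sum is positive."""
    return 1 if x > 0 else 0

def xor_mlp(x1, x2):
    h_or = step(x1 + x2 - 0.5)       # hidden unit 1: logical OR
    h_and = step(x1 + x2 - 1.5)      # hidden unit 2: logical AND
    return step(h_or - h_and - 0.5)  # output: OR and not AND == XOR

for a in (0, 1):
    for b in (0, 1):
        print(a, b, xor_mlp(a, b))
```

The hidden layer re-maps the four input points so that the output unit can separate them with a single threshold, which a single-layer network cannot do.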

In UI code generation using deep learning, the software environment is the Windows 10
operating system along with Python. Keras, a Python neural network library, is required to
build models in TensorFlow 3.3.0.10. A better cloud environment is also required, so Google
Colab is used.

EXPERIMENT AND RESULT:

From the COCO database, the CenterNet algorithm is taken and a small sketch is given to
recognise the annotations in the image; the corresponding results are displayed. These
results were obtained with only a few steps of training on the data.

The system recognised annotations such as the green checkboxes, orange buttons, and purple
text boxes, along with their confidence percentages, but a few annotations, like the radio
button, were not recognised, as seen in the diagram below. Training the algorithm for a few
more iterations gives better accuracy.
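The behaviour described above can be sketched as a confidence-threshold filter over the detector's output: detections whose score clears the threshold are reported with their percentage, while weak hits (such as the missed radio button) are dropped. The class names, scores, and threshold below are illustrative, not measured results:

```python
# Sketch of the post-processing step implied above: keep only detections
# whose confidence clears a threshold.  Scores and classes are illustrative.

def filter_detections(detections, threshold=0.5):
    """Return (class, score) pairs whose score meets the threshold."""
    return [(cls, score) for cls, score in detections if score >= threshold]

raw = [
    ("checkbox",     0.91),
    ("button",       0.84),
    ("textbox",      0.77),
    ("radio_button", 0.23),  # below threshold: not reported to the user
]
kept = filter_detections(raw, threshold=0.5)
print(kept)
```

Further training raises the scores of under-recognised classes, so more of them survive the same threshold.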

CONCLUSION:
In this paper, we presented VisionforCode, an easy and efficient method to generate computer
code given a single GUI image as input. Our work demonstrates a system that automates the
process of implementing GUIs by using common design symbols and training modules.
Implementing a now-standard attention mechanism could further improve the quality of the
generated code. Finally, we designed a technique to evaluate how well the new system
translates sketches into code.
