Data Structure and Algorithm
Buddhi Raj Gurung
Table of Contents
P1 Create a design specification for data structures explaining the valid operations that can be carried out on the structures
    Introduction
    Data Structure
    Memory Stack
        The operations of a memory stack
        LIFO (Last In First Out)
    Stack Operations
    Bubble Sort
    1. Bellman-Ford Algorithm
    2. Dijkstra's Algorithm
P3 Using an imperative definition, specify the abstract data type for a software stack
D2 Discuss the view that imperative ADTs are a basis for object orientation and, with justification, state whether you agree
Class MyT<T>
- Generics were introduced in Java 5. "Generics" means parameterized data types. Parameterized types are important in coding because they allow us to create and use a class, interface or method with many different types of data. With the help of generics, a linked-list object is created that contains elements of type T. The type parameter <T> stands for an object type that is only specified when the class is used.
+ next: MyT<T> - reference to the next node in the list, or null.
- More information about nodes in the Linked List will be provided in section II.1.C.
Class MyLinkedListT<T>
- This class implements a linked list from scratch in Java; it does not use the collection classes in the standard library. The point of building this class is to understand how a linked list works, step by step, as a foundation for more complex data structures.
- MyLinkedListT<T> is a list of MyT<T> nodes; the collection of nodes stored in this class begins with a head and ends with a tail.
- This class is also used to implement the Stack class and Queue class from scratch in Java.
- More information about Stack and Queue will be provided in sections II.2 and II.3.
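The following is a minimal Java sketch of how the MyT<T> node and MyLinkedListT<T> classes described above might look. The class and field names follow the descriptions in this section; the data field, the addLast() method and the size counter are illustrative assumptions, not taken from the original code.

// Minimal sketch of the node and linked-list classes described above.
// MyT<T> holds one element of type T and a reference to the next node (or null).
class MyT<T> {
    T data;          // the element stored in this node (assumed field)
    MyT<T> next;     // reference to the next node in the list, or null

    MyT(T data) {
        this.data = data;
        this.next = null;
    }
}

// MyLinkedListT<T> keeps the collection of MyT<T> nodes between head and tail.
class MyLinkedListT<T> {
    private MyT<T> head;   // first node of the list
    private MyT<T> tail;   // last node of the list
    private int size;

    // Illustrative operation: append a new node at the tail.
    public void addLast(T value) {
        MyT<T> node = new MyT<>(value);
        if (head == null) {        // empty list: the new node is both head and tail
            head = node;
            tail = node;
        } else {                   // otherwise link it after the current tail
            tail.next = node;
            tail = node;
        }
        size++;
    }

    public int size() {
        return size;
    }
}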
Class Product
code: string - the code of the product (this should be unique for the product).
pro_name: string - the name of the product.
quantity: integer - the number of products with the same code in a shop at the beginning of a day.
saled: integer - the number of products with the same code that are sold during the day.
Condition: saled ≤ quantity.
Class Customer
code: string - the code of the customer (this should be unique for the customer).
cus_name: string - the name of the customer.
phone: string - the phone number of the customer (must contain digits only).
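A minimal Java sketch of the Product and Customer classes, using the field names listed above. The constructors and the validation checks are illustrative assumptions; only the fields and the two stated conditions come from the specification.

// Sketch of the Product and Customer classes described above.
class Product {
    String code;       // unique product code
    String pro_name;   // product name
    int quantity;      // number of products with this code at the start of the day
    int saled;         // number of products with this code sold during the day

    Product(String code, String pro_name, int quantity, int saled) {
        if (saled > quantity) {                      // enforce the condition saled <= quantity
            throw new IllegalArgumentException("saled must not exceed quantity");
        }
        this.code = code;
        this.pro_name = pro_name;
        this.quantity = quantity;
        this.saled = saled;
    }
}

class Customer {
    String code;       // unique customer code
    String cus_name;   // customer name
    String phone;      // phone number, digits only

    Customer(String code, String cus_name, String phone) {
        if (!phone.matches("\\d+")) {                // phone must contain digits only
            throw new IllegalArgumentException("phone must contain digits only");
        }
        this.code = code;
        this.cus_name = cus_name;
        this.phone = phone;
    }
}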
Class Ordering
- This class contains the main() function that runs the application. The investigation of Searching, Sorting and Recursion is carried out from these classes.
Module-order DEFINITIONS AUTOMATIC TAGS ::= BEGIN
Order ::= SEQUENCE {
    header  Order-header,
    items   SEQUENCE OF Order-line
}
Order-header ::= SEQUENCE {
    number   Order-number,
    date     Date,
    client   Client,
    payment  Payment-method
}
Order-number ::= NumericString (SIZE(2))
Date ::= NumericString (SIZE(8))  -- MMDDYYYY
Client ::= SEQUENCE {
    ...
}
END
Data Structure
A data structure is a concrete implementation of a data type. It is possible to analyze the time complexity and memory usage of a data structure, but not of a data type. Data structures can be implemented in a number of ways, and their implementation may vary from language to language. A data structure is a specific way to organize data in a computer so that it can be used effectively; for example, we can store a list of items that have the same data type using the array data structure. At the backbone of every program or piece of software are two entities: data and algorithms. Algorithms transform data into something a program can effectively use, while data structures determine how that information is laid out and accessed in memory. Data structures often build on each other to create new structures, and each structure is suited to a particular data-access pattern. Choosing the most efficient data structure for the job greatly improves algorithm performance, helping to speed up application processing. This allows computer systems to efficiently manage massive amounts of data in large-scale indexing, massive databases, and big data platforms. In other words, data structures are essentially different ways of organizing data on your computer.
Data structures are divided into two categories: primitive and non-primitive.
Primitive data structures are supported at the machine level and can be used to create non-primitive data structures. They are basic and pure in form, with predetermined behavior and specifications. Examples are Integer, Float, Character, and Pointer. A pointer, however, does not hold a data value; instead it holds the memory address of a data value, which is why it is also called a reference data type.
Non-primitive data structures cannot be implemented without primitive data structures. Although they are also provided by the language itself, they are derived data structures and cannot be formed without using primitive data structures. Non-primitive data structures are divided into two types: simple data structures and compound data structures.
Memory Stack:
The operations of a memory stack
Stack memory is a memory-usage mechanism that allows system memory to be used as temporary data storage behaving as a last-in, first-out (LIFO) buffer. An example of a stack is illustrated below:
Items in a stack are inserted or removed in a linear order, not in any random sequence. As in any assembled queue or collection, the data items in a stack are stored and accessed in a specific way; in this case, a technique called LIFO (Last In First Out) is used. This involves a series of insertion and removal operations, which we will discuss in the following section.
Stack Operations
The Push Operation
The push operation involves inserting data items into a stack. Consider the restaurant plate dispenser in Figure 9: a push operation adds additional plates (data items) to the plate dispenser (stack). The first plate is pushed down to the bottom of the stack, with all subsequent plates placed in order on top of it. The first inserted data item is therefore the least accessible and sits at the bottom of the stack.
Stack memory is a special kind of memory used to store information temporarily, when we want to read the information back in the reverse order to that in which it was written, without caring about how the data is addressed. Stack memory is sequential memory with access restrictions that allow information to be written to or read from only one location, called the top of the stack. Data is written to the top of the stack, shifting all the information stored in the locations below it one position deeper. Data can likewise only be read from the top of the stack; reading removes the information at the top, and the item stored one position below moves up to take its place. The stack is therefore a last-in, first-out structure. It works much like a rifle magazine, in which loading a new cartridge pushes all the remaining cartridges deeper into the magazine, and firing a shot removes the top cartridge and brings the cartridge beneath it to the top. Three instructions are used to manipulate the stack: push (write data onto the stack), read stack (read the top of the stack), and pop (delete data from the top of the stack, shifting its contents up). Sometimes operations take the top two positions of the stack as their arguments and as the location for their result; after such an operation, a pop is performed and the result is available at the top of the stack.
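As a minimal illustration of the push, read (peek) and pop operations described above, here is a small array-based stack in Java in which an index plays the role of the stack pointer. The class name, the capacity handling and the use of int elements are assumptions made for this sketch.

// Illustrative array-based stack with a "stack pointer" index.
class ArrayStack {
    private final int[] data;
    private int top = -1;                 // index of the top of the stack; -1 means empty

    ArrayStack(int capacity) {
        data = new int[capacity];
    }

    // push: write a value onto the top of the stack
    public void push(int value) {
        if (top == data.length - 1) {
            throw new IllegalStateException("stack overflow");
        }
        data[++top] = value;
    }

    // peek (read stack): return the value at the top without removing it
    public int peek() {
        if (top == -1) {
            throw new IllegalStateException("stack is empty");
        }
        return data[top];
    }

    // pop: remove and return the value at the top of the stack
    public int pop() {
        if (top == -1) {
            throw new IllegalStateException("stack underflow");
        }
        return data[top--];
    }
}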
The first stack implementations were based on registers, in which data was moved in parallel during push and pop operations. Modern stack implementations are based on main memory, supported by special registers that make it easy to access and manage the stack. A stack is a sequence of consecutive memory locations reserved by the operating system. The boundary of the stack is determined by two special registers - the stack base register and the stack limit register - which hold the addresses of the upper and lower boundary positions. The address space between these addresses is excluded from any use other than the stack implementation. The top of the stack is identified by keeping its address in a third register, called the stack pointer register. Before the computer starts executing a program, the stack pointer register is set to the contents of the stack base register. After each push operation, the contents of the stack pointer register are changed, by the number of bytes pushed, in the direction of the stack limit. After each pop operation, the contents of the stack pointer are changed in the direction of the stack base.
Figure 4: Stack implementation in the computer - the top of the stack in memory.
The figure above shows the organization of the stack in main memory, in which the top of the stack is accessed only in memory. Another solution, shown in the figure below, assumes that the top of the stack and the position just below it are stored in special processor registers: the top-of-stack register and the next-to-top register.
Figure 5: Stack implementation in the computer - the top of the stack in registers.
3. store the current contents of the program counter (i.e., the return address of the instruction following the call) on the stack with a "Push" operation,
4. write the target address embedded in the "Call" instruction to the program counter,
5. fetch the next instruction according to the new contents of the program counter.
The address written to the stack will be used by the "Ret" return instruction to automatically return from the subroutine to the instruction following the "Call".
The "Ret" instruction:
2. reads the return address from the top of the stack (stored there by the "Call" instruction),
3. writes this address to the program counter,
4. performs a "Pop" operation on the stack,
5. fetches the next instruction according to the new contents of the program counter.
The address written to the stack during a subroutine call is sometimes called a trace, and the subroutine call is called a jump with a trace. Nested subroutine calls are often used, in which a new subroutine is called from the body of a subroutine during its execution. The mechanism of returning through traces stored on the stack automatically restores the calling program's context, with the correct return order preserved. The figure below shows the actions involved in nested subroutine calls from a top-level program. In the call of subroutine 1 (Call 500 stored at address 100), the return address 101 is stored on the stack. In the call of subroutine 2 (Call 900 stored at address 600), the return address 601 is stored on the stack. When returning from subroutine 2, address 601 is taken from the stack; when returning from subroutine 1, address 101 is taken from the stack.
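To make the return-address mechanism above concrete, here is a small Java simulation of the two nested calls: the return addresses 101 and 601 from the example are pushed and then popped in reverse order. The use of java.util.ArrayDeque and the class name are assumptions of this sketch.

import java.util.ArrayDeque;
import java.util.Deque;

// Illustrative simulation of the nested subroutine calls described above:
// each "Call" pushes a return address, each "Ret" pops the most recent one,
// so returns happen in the correct (reverse) order.
public class ReturnAddressDemo {
    public static void main(String[] args) {
        Deque<Integer> stack = new ArrayDeque<>();

        stack.push(101);   // Call at address 100: push return address 101
        stack.push(601);   // nested Call at address 600: push return address 601

        System.out.println("Return to " + stack.pop());  // 601 (return from subroutine 2)
        System.out.println("Return to " + stack.pop());  // 101 (return from subroutine 1)
    }
}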
Enqueue Operation
Step 1: Check whether the queue is full; if it is, produce an overflow error and exit.
Step 2: Else, increment the rear pointer to point to the next empty space and add the data element to the location the rear pointer points to.
Step 3: Return success.
Dequeue Operation
Accessing data from the queue is a process of two steps:
Step 1: Access the data where the front pointer is pointing.
Step 2: Increment the front pointer to point to the next available data element.
Step 3: Return success.
Some more functions are needed to make the above-mentioned queue operations effective, such as:
peek() - gets the element at the front of the queue without removing it.
Example
FIFO is an abbreviation for first in, first out. It is a method for handling data structures where
the first element is processed first and the newest element is processed last.
There is a ticket counter where people come, take tickets and go.
People enter a line (queue) to get to the Ticket Counter in an organized manner.
The person who enters the queue first will get the ticket first and leave the queue.
The person entering the queue next will get the ticket after the person in front of him.
In this way, the person entering the queue last will get the ticket last.
Therefore, the first person to enter the queue gets the ticket first and the last person to enter the queue gets the ticket last.
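Below is a minimal Java sketch of a FIFO queue with front and rear pointers, following the enqueue and dequeue steps described above. The circular-array layout, the fixed capacity and the class name are assumptions made for this sketch.

// Sketch of a FIFO queue with front and rear pointers (circular array).
class ArrayQueue {
    private final int[] data;
    private int front = 0;    // index of the next element to dequeue
    private int rear = -1;    // index of the most recently enqueued element
    private int count = 0;

    ArrayQueue(int capacity) {
        data = new int[capacity];
    }

    // enqueue: if the queue is not full, advance rear and store the element there
    public void enqueue(int value) {
        if (count == data.length) {
            throw new IllegalStateException("queue overflow");
        }
        rear = (rear + 1) % data.length;
        data[rear] = value;
        count++;
    }

    // dequeue: read the element at front, then advance front to the next element
    public int dequeue() {
        if (count == 0) {
            throw new IllegalStateException("queue underflow");
        }
        int value = data[front];
        front = (front + 1) % data.length;
        count--;
        return value;
    }

    // peek: return the element at the front without removing it
    public int peek() {
        if (count == 0) {
            throw new IllegalStateException("queue is empty");
        }
        return data[front];
    }
}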
Step 1: Starting with the first element (index = 0), compare the current element with the next element
in the array.
Step 2: If the current element is larger than the next element in the array, swap them.
Step 3: If the current element is less than the next element, move to the next element. Repeat Step 1.
1st iteration:
6 2 7 3 5 4
As you can see, the numbers above are not sorted, so we are going to sort them in ascending order.
6 2 7 3 5 4
We start with the first two elements: 6 > 2, 2 is smaller, so swap the values.
2 6 7 3 5 4
7 > 3, 3 is smaller, so swap the values.
2 6 3 7 5 4
After we swap 7 with 3, we can see that 7 can still be swapped: 7 > 5, 5 is smaller, so swap the values.
2 6 3 5 7 4
7 > 4, 4 is smaller, so swap the values.
2 6 3 5 4 7
So this is the result of the 1st iteration: 2, 6, 3, 5, 4 and 7. The numbers are still not sorted in ascending order, so we have to do a 2nd iteration.
2nd iteration:
2 6 3 5 4 7
Now we start from the second element, which is 6: 6 > 3, 3 is smaller, so swap the values.
2 3 6 5 4 7
6 > 5, 5 is smaller, so swap the values.
2 3 5 6 4 7
6 > 4, 4 is smaller, so swap the values.
2 3 5 4 6 7
After the 2nd iteration we are almost there; the numbers are almost in order except for the number 4. To make sure we reach the correct order, we continue with a 3rd iteration.
3rd iteration:
2 3 5 4 6 7
5 > 4, 4 is smaller, so swap the values.
2 3 4 5 6 7
Now every element is in its correct place, so after the 3rd iteration the ascending order is 2, 3, 4, 5, 6 and 7. Keep in mind, however, that the algorithm itself only knows the array is sorted once it completes a pass with no swaps, so a 4th iteration is still carried out.
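The following Java sketch implements the bubble sort steps described above and sorts the example array 6 2 7 3 5 4. The swapped flag, which stops the algorithm after a pass with no swaps, and the class and method names are illustrative choices.

import java.util.Arrays;

// Bubble sort following Steps 1-3 above: repeatedly compare adjacent elements
// and swap them when the left one is larger, until a full pass makes no swap.
public class BubbleSortDemo {
    static void bubbleSort(int[] a) {
        boolean swapped = true;
        while (swapped) {
            swapped = false;
            for (int i = 0; i < a.length - 1; i++) {
                if (a[i] > a[i + 1]) {      // current element larger than the next: swap
                    int tmp = a[i];
                    a[i] = a[i + 1];
                    a[i + 1] = tmp;
                    swapped = true;
                }
            }
        }
    }

    public static void main(String[] args) {
        int[] numbers = {6, 2, 7, 3, 5, 4};   // the example array used above
        bubbleSort(numbers);
        System.out.println(Arrays.toString(numbers));  // [2, 3, 4, 5, 6, 7]
    }
}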
Selection Sort
Selection sort is conceptually the simplest sorting algorithm. It begins by finding the smallest element in the array and swapping it with the element in the first position, then finds the second smallest element and swaps it with the element in the second position, and continues in this way until the entire array is sorted. It is called selection sort because it repeatedly selects the next smallest element and moves it into the right place. The selection sort algorithm is used to arrange a list of elements in a particular order (ascending or descending). In selection sort, the first element in the list is selected and compared with all the remaining elements in the list. If any element is smaller than the selected element (for ascending order), the two are swapped, so that the first position ends up holding the smallest element in the sort order. Next, we select the element in the second position and compare it with all the remaining elements in the list; if any element is smaller than the selected element, the two are swapped. This procedure is repeated until the entire list is sorted.
Step 1: Select the first element in the list (that is, the element in the first position in the list).
Step 2: Compare the selected element with all other elements in the list.
Step 3: In each comparison, if any element is found to be smaller than the selected element (for ascending order), then the two are swapped.
Step 4: Repeat the same procedure with the element in the next position in the list until the whole list
is sorted.
15 20 10 30 50 18 5 45
1st iteration:
5 20 15 30 50 18 10 45
2nd iteration:
Select the second position element in the list, compare it with all other elements in the list, and whenever we find a smaller element than the element at the second position, swap those two elements.
5 10 20 30 50 18 15 45
3rd iteration:
Select the third position element in the list, compare it with all other elements in the list, and whenever we find a smaller element than the element at the third position, swap those two elements.
5 10 15 30 50 20 18 45
4th iteration:
Select the fourth position element in the list, compare it with all other elements in the list, and whenever we find a smaller element than the element at the fourth position, swap those two elements.
5 10 15 18 50 30 20 45
5th iteration:
Select the fifth position element in the list, compare it with all other elements in the list, and whenever we find a smaller element than the element at the fifth position, swap those two elements.
5 10 15 18 20 50 30 45
6th iteration:
Select the sixth position element in the list, compare it with all other elements in the list, and whenever we find a smaller element than the element at the sixth position, swap those two elements.
5 10 15 18 20 30 50 45
7th iteration:
Select the seventh position element in the list, compare it with all other elements in the list, and whenever we find a smaller element than the element at the seventh position, swap those two elements.
5 10 15 18 20 30 45 50
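As a compact illustration of selection sort, here is a Java sketch that sorts the example list 15 20 10 30 50 18 5 45. It uses the common variant that first finds the index of the smallest remaining element and then performs a single swap per pass, rather than swapping every time a smaller element is found as in the walkthrough above; the class and method names are assumptions.

import java.util.Arrays;

// Selection sort: for each position, find the smallest remaining element
// and swap it into that position.
public class SelectionSortDemo {
    static void selectionSort(int[] a) {
        for (int i = 0; i < a.length - 1; i++) {
            int min = i;                        // assume the element at position i is the smallest
            for (int j = i + 1; j < a.length; j++) {
                if (a[j] < a[min]) {            // found a smaller element in the rest of the list
                    min = j;
                }
            }
            int tmp = a[i];                     // swap it into position i
            a[i] = a[min];
            a[min] = tmp;
        }
    }

    public static void main(String[] args) {
        int[] numbers = {15, 20, 10, 30, 50, 18, 5, 45};  // the example list used above
        selectionSort(numbers);
        System.out.println(Arrays.toString(numbers));      // [5, 10, 15, 18, 20, 30, 45, 50]
    }
}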
Bubble sort: adjacent elements are compared and swapped.
Selection sort: the largest element is selected and swapped with the last element (in the case of ascending order).
In bubble sort, each element is compared with the adjacent element and swapped as needed. Selection sort, on the other hand, works by selecting an element and swapping it with the element at a specific position. The selected element can be the largest or the smallest depending on the order, ascending or descending. The worst-case complexity is the same for both algorithms, namely O(n²), but the best-case behavior differs: bubble sort can finish after roughly n comparisons on an already-sorted array, while selection sort always performs on the order of n² comparisons. Bubble sort is a stable algorithm; in contrast, selection sort is not stable. Selection sort is faster and more efficient compared with bubble sort, which is considered the simplest but also the least efficient of the two. Bubble sort also uses extra space for temporary variables and requires more swaps.
1. Bellman-Ford Algorithm
The Bellman-Ford algorithm works by overestimating the length of the path from the starting vertex to all other vertices. It then iteratively improves those estimates by finding new paths that are shorter than the previously overestimated ones. Like other dynamic programming problems, the algorithm calculates the shortest paths in a bottom-up manner. First it calculates the shortest distances that use at most one edge in the path, then the shortest paths with at most 2 edges, and so on. After the i-th iteration of the outer loop, the shortest paths with at most i edges have been calculated. Since there can be at most |V| - 1 edges in any simple path, the outer loop runs |V| - 1 times. The idea is that, assuming there is no negative-weight cycle, if we have calculated the shortest paths with at most i edges, then iterating over all the edges once more guarantees the shortest paths with at most i + 1 edges. Below are the detailed steps used to find the shortest paths from a single source vertex to all other vertices in the given graph. Given a graph and a source vertex src in the graph, find the shortest paths from src to all vertices in the given graph. The graph may contain negative-weight edges.
Algorithm
Create a set sptSet (shortest path tree set) that keeps track of vertices included in shortest
path tree, i.e., whose minimum distance from source is calculated and finalized. Initially,
this set is empty.
Assign a distance value to all vertices in the input graph. Initialize all distance values as
INFINITE. Assign distance value as 0 for the source vertex so that it is picked first.
While sptSet doesn’t include all vertices
Pick a vertex u which is not there in sptSet and has minimum distance value.
Include u to sptSet.
Update the distance values of all adjacent vertices of u. To update the distance values, iterate through all adjacent vertices: for every adjacent vertex v, if the sum of the distance value of u (from the source) and the weight of edge u-v is less than the distance value of v, then update the distance value of v.
To better understand how the Bellman-Ford algorithm works, we will walk through an example. Set the distance of the given source vertex to 0 and initialize all other distances as infinite. The total number of vertices in the graph is 5, so all edges must be processed 4 times.
Let all edges be processed in the following order: (B, E), (D, B), (B, D), (A, B), (A, C), (D, C), (B, C), (E, D). We get the distances shown after all edges are processed for the first time. The first row shows the initial distances. The second row shows the distances when the edges (B, E), (D, B), (B, D) and (A, B) are processed. The third row shows the distances when (A, C) is processed. The fourth row shows the distances when (D, C), (B, C) and (E, D) are processed.
The second iteration guarantees that all shortest paths of at most 2 edges are found. The algorithm processes all edges 2 more times. The distances are already minimized after the second iteration, so the third and fourth iterations do not update any distances.
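A minimal Java sketch of the Bellman-Ford relaxation loop described above. The graph has the 5 vertices A-E and the 8 edges listed in the example, but the edge weights are assumptions, since they only appear in the original figure; the expected output in the comment applies only to these assumed weights.

import java.util.Arrays;

// Bellman-Ford: relax every edge |V| - 1 times, as described above.
public class BellmanFordDemo {
    // each edge is {from, to, weight}
    static int[] bellmanFord(int vertices, int[][] edges, int src) {
        int[] dist = new int[vertices];
        Arrays.fill(dist, Integer.MAX_VALUE);   // all distances start as "infinity"
        dist[src] = 0;                          // except the source itself

        for (int i = 0; i < vertices - 1; i++) {        // repeat |V| - 1 times
            for (int[] e : edges) {
                int u = e[0], v = e[1], w = e[2];
                if (dist[u] != Integer.MAX_VALUE && dist[u] + w < dist[v]) {
                    dist[v] = dist[u] + w;              // relax edge u -> v
                }
            }
        }
        return dist;
    }

    public static void main(String[] args) {
        // vertices A=0, B=1, C=2, D=3, E=4; weights are illustrative, not from the figure
        int[][] edges = {
            {1, 4, 2}, {3, 1, 1}, {1, 3, 2}, {0, 1, -1},
            {0, 2, 4}, {3, 2, 5}, {1, 2, 3}, {4, 3, -3}
        };
        // prints [0, -1, 2, -2, 1] with these assumed weights
        System.out.println(Arrays.toString(bellmanFord(5, edges, 0)));
    }
}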
2. Dijkstra’s Algorithm
Dijkstra's algorithm uses a breadth-first-search-like approach to solve the single-source shortest path problem. It places one constraint on the graph: it cannot have negative-weight edges. With that single limitation, however, Dijkstra's algorithm significantly improves on the running time of Bellman-Ford. Dijkstra's algorithm is sometimes also used to solve the all-pairs shortest path problem by simply running it from every vertex; again, this requires all edge weights to be positive. Dijkstra's algorithm allows you to calculate the shortest path between one node (you choose which node) and every other node in the graph.
Dijkstra's algorithm works on the basis that any subpath B -> D of a shortest path A -> D between vertices A and D is itself a shortest path between vertices B and D.
Dijkstra used this property in the opposite direction: we overestimate the distance of each vertex from the starting vertex, then visit each node and its neighbors to find the shortest path to those neighboring nodes. The algorithm uses a greedy approach in the sense that we take the best next step in the hope that the end result is the best solution for the whole problem. For instance, let us calculate the shortest paths between node C and the other nodes in the graph below:
During the execution of the algorithm, we mark every node with its minimum distance to node C (our selected node). For node C, this distance is 0. For the rest of the nodes, since we do not yet know the minimum distance, it starts at infinity (∞):
We also keep track of a current node. Initially, we set it to C (our selected node); in the picture, the current node is marked with a red dot. Now we check the neighbors of our current node (A, B and D) in no particular order. Let us start with B. We add the minimum distance of the current node (in this case 0) to the weight of the edge connecting the current node with B (in this case 7) and obtain 0 + 7 = 7. We compare that value with the minimum distance of B (infinity); the lower value is kept as the minimum distance of B (in this case, 7 is smaller than infinity):
So far, so good. Now we check neighbor A. We add 0 (the minimum distance of C, our current node) to 1 (the weight of the edge connecting our current node with A) to get 1. We compare 1 with the minimum distance of A (infinity) and keep the smaller value:
We have now checked all the neighbors of C, so we mark it as visited; visited nodes are indicated with a green check mark:
Now we need to select a new current node. That node must be an unvisited node with the smallest minimum distance (so, the node with the smallest number and no check mark). That is A, so we mark it with a red dot:
Now we repeat the algorithm. We check the neighbors of our current node, ignoring the visited nodes. This means we only check B. For B, we add 1 (the minimum distance of A, our current node) to 3 (the weight of the edge between A and B) to get 4. We compare 4 with the minimum distance of B (7) and keep the smaller value: 4.
Then we mark A as visited and select a new current node: D, which is the unvisited node with the smallest current distance.
We repeat the algorithm again. This time, we check B and E. For B, we get 2 + 5 = 7; we compare that value with the minimum distance of B (4) and keep the minimum (4). For E, we obtain 2 + 7 = 9, compare it with the minimum distance of E (infinity) and keep the smaller one (9). We mark D as visited and make B our current node.
Now we only need to check E: 4 + 1 = 5, which is smaller than the minimum distance of E (9), so we keep 5. After that, we mark B as visited and set E as the current node.
Since there are no unvisited nodes left, we are done! The minimum distance of each node now represents the minimum distance from that node to node C (the node we chose as the source node).
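The following Java sketch follows the steps of Dijkstra's algorithm used in the walkthrough above. The adjacency matrix encodes the edge weights that can be read from the example (C-A = 1, C-B = 7, C-D = 2, A-B = 3, D-B = 5, D-E = 7, B-E = 1); reconstructing the graph this way, and the class and method names, are assumptions of this sketch.

import java.util.Arrays;

// Dijkstra: repeatedly pick the unvisited node with the smallest known distance,
// mark it visited, and relax its neighbors.
public class DijkstraDemo {
    static final int INF = Integer.MAX_VALUE;

    static int[] dijkstra(int[][] graph, int src) {
        int n = graph.length;
        int[] dist = new int[n];
        boolean[] visited = new boolean[n];
        Arrays.fill(dist, INF);
        dist[src] = 0;                          // distance of the source to itself is 0

        for (int step = 0; step < n; step++) {
            int u = -1;
            for (int i = 0; i < n; i++) {       // pick the unvisited node with the smallest distance
                if (!visited[i] && (u == -1 || dist[i] < dist[u])) {
                    u = i;
                }
            }
            if (dist[u] == INF) break;          // remaining nodes are unreachable
            visited[u] = true;

            for (int v = 0; v < n; v++) {       // relax every neighbor of u
                if (graph[u][v] > 0 && !visited[v] && dist[u] + graph[u][v] < dist[v]) {
                    dist[v] = dist[u] + graph[u][v];
                }
            }
        }
        return dist;
    }

    public static void main(String[] args) {
        // nodes: 0=A, 1=B, 2=C, 3=D, 4=E; 0 means "no edge"
        int[][] graph = {
            {0, 3, 1, 0, 0},
            {3, 0, 7, 5, 1},
            {1, 7, 0, 2, 0},
            {0, 5, 2, 0, 7},
            {0, 1, 0, 7, 0}
        };
        // distances from C, matching the walkthrough: [1, 4, 0, 2, 5]
        System.out.println(Arrays.toString(dijkstra(graph, 2)));
    }
}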
A software stack is a set of programs, utilities or routines that work as a group (see stack, application stack and protocol stack). In addition, to allow them to work together, abstract data types such as Stack (LIFO) and Queue (FIFO) are required.
So, in order to better understand the software stack, let us look at stack operation on an Android device. For each application running on an Android device, the runtime system maintains an Activity Stack. When an application is launched, the application's first activity is started and placed on the stack. When a second activity is started, it is placed on top of the stack and the previous activity is pushed down. The activity at the top of the stack is the active (or running) activity. When an activity exits, it is popped off the stack by the runtime and the activity located just below it in the stack becomes the current activity. An activity at the top of the stack might exit, for example, simply because the task it was responsible for has been completed. Alternatively, the user may have pressed the on-screen Back button to return to the previous activity, causing the current activity to be popped off the stack by the runtime system and therefore destroyed. A visual representation of the Android Activity Stack is illustrated in the figure below:
As shown in the diagram, new activities are pushed onto the top of the stack when they are started. The currently active activity is located at the top of the stack until it is either pushed down the stack by a new activity, or popped off the stack when it exits or the user navigates to the previous activity. In the event that resources become constrained, the runtime will kill activities, starting with those at the bottom of the stack. The Activity Stack is what is referred to in programming terminology as a Last-In-First-Out (LIFO) stack, in which the last item to be pushed onto the stack is the first to be popped off.
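As a small illustration of the LIFO behavior of the Activity Stack described above, here is a Java simulation in which starting an activity pushes its name onto a stack and pressing Back pops it off. The activity names and the use of java.util.ArrayDeque are invented for this sketch; this is not the actual Android runtime API.

import java.util.ArrayDeque;
import java.util.Deque;

// Illustrative simulation of the Activity Stack behavior described above.
public class ActivityStackDemo {
    public static void main(String[] args) {
        Deque<String> activityStack = new ArrayDeque<>();

        activityStack.push("MainActivity");      // first activity started
        activityStack.push("DetailActivity");    // second activity pushed on top

        System.out.println("Current: " + activityStack.peek());  // DetailActivity

        activityStack.pop();                      // user presses Back: top activity destroyed
        System.out.println("Current: " + activityStack.peek());  // MainActivity again
    }
}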
We can encapsulate data using accessor and mutator methods. Another method is to use a named property. Regardless of the method, the goal is to use the data without any damage or unintended alteration.
Advantages of encapsulation
Data encapsulation, sometimes referred to as data hiding, is the mechanism whereby the
implementation details of a class are kept hidden from the user. The user can only perform a
restricted set of operations on the hidden members of the class by executing special functions
commonly called methods. The actions performed by the methods are determined by the designer of
the class, who must be careful not to make the methods either overly flexible or too restrictive. This
idea of hiding the details away from the user and providing a restricted, clearly defined interface is
the underlying theme behind the concept of an abstract data type.[ CITATION Don96 \l 1033 ]
The advantage of using data encapsulation comes when the implementation of the class changes but
the interface remains the same. For example, to create a stack class which can contain integers, the
designer may choose to implement it with an array, which is hidden from the user of the class. The
designer then writes the push() and pop() methods, which put integers into the array and remove
them from the array respectively. These methods are made accessible to the user. Should an attempt
be made by the user to access the array directly, a compile time error will result. Now, should the
designer decide to change the stack's implementation to a linked list, the array can simply be
replaced with a linked list and the push() and pop() methods rewritten so that they manipulate the
linked list instead of the array. The code which the user has written to manipulate the stack is still
valid because it was not given direct access to the array to begin with.[ CITATION Don96 \l 1033 ]
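A minimal Java sketch of the encapsulation example described above: the stack's backing array is private, so client code can only use push() and pop(), and the array could later be replaced by a linked list without changing this interface. The class name, the int element type and the growth strategy are assumptions.

// Encapsulated stack: the backing array is a hidden implementation detail.
class IntStack {
    private int[] elements = new int[16];   // hidden from users of the class
    private int size = 0;

    public void push(int value) {
        if (size == elements.length) {       // grow the hidden array when it is full
            int[] bigger = new int[elements.length * 2];
            System.arraycopy(elements, 0, bigger, 0, elements.length);
            elements = bigger;
        }
        elements[size++] = value;
    }

    public int pop() {
        if (size == 0) {
            throw new IllegalStateException("stack is empty");
        }
        return elements[--size];
    }
}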
To sum up, the advantages of data encapsulation and information hiding are:
Protecting an object from unwanted access by clients.
Allowing access to a level without revealing the complex details below that level.
Reducing human errors.
Simplifying the maintenance of the application.
Making the application easier to understand.
Conclusion:
Encapsulation is the first step towards object-oriented programming. We can encapsulate by using accessor and mutator methods; another approach is to use a named property. The advantage of this is that users of your object can manipulate its internal data through a single named item.
D2 Discuss the view that imperative ADTs are a basis for object orientation
and, with justification, state whether you agree.
Most people know that object-oriented programming is based on the concept of "objects", which can contain data, in the form of fields (attributes), and code, in the form of procedures. But when we look at the development of programming, we can see the emergence of programming styles one after another: first linear programming, then structured programming, then data abstraction, and finally object-oriented programming. Data abstraction therefore appeared before object-oriented programming. As everyone knows, structured programming has a lot of weaknesses, so object-oriented programming was born to overcome those weaknesses and, at the same time, to incorporate data abstraction, creating a complete programming structure. Object-oriented programming is built on the basis of structured programming and data abstraction. The fundamental change is that an object-oriented program is designed around the data we work on, rather than around the functions of the program itself. This is completely natural once we understand that the goal of a program is to process data; after all, the work that computers perform is often called data processing. Data and the operations on it are linked at a basic (low) level, each serving a specific goal, and object-oriented programs make this relationship explicit. Object-oriented programming links data structures to operations in the way we usually think about the world around us: we associate certain activities with certain kinds of objects and base our assumptions on those relationships.
Data abstraction acts on data just as functional abstraction acts on operations. With data abstraction, data structures and their elements can be used without bothering about the specific details. For example, floating-point numbers have been abstracted in all programming languages: we do not need to care about the exact binary representation of a floating-point number when assigning a value, nor do we need to know the peculiarities of binary multiplication when multiplying floating-point values. What matters is that floating-point numbers work correctly and are understood. In general, an ADT is a type of data that can be represented in a computer together with the operations defined on that type, and it can be realized with encapsulation and information hiding using the public, protected and private keywords in class declarations. For example, a stack can be treated as an object when it is implemented as an array or linked list with push() and pop() operations and hidden internal information.
From that point of view, object-oriented programming achieves data abstraction using procedural abstraction, while abstract data types depend on type abstraction. In addition, objects are organized around the functions that provide the data abstraction, while an ADT is organized around its operations.
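To illustrate the point about a stack being treated both as an abstract data type and as an object, here is a small Java sketch in which the stack ADT is expressed as an interface (type abstraction) and realized by an object that hides its linked-list implementation. The interface and class names are invented for this example.

// The stack ADT as a Java interface, with one hidden implementation behind it.
interface StackADT<T> {
    void push(T item);   // add an item to the top of the stack
    T pop();             // remove and return the item at the top of the stack
    boolean isEmpty();
}

class LinkedStack<T> implements StackADT<T> {
    // simple singly linked node holding one element
    private static class Node<T> {
        T value;
        Node<T> next;
        Node(T value, Node<T> next) { this.value = value; this.next = next; }
    }

    private Node<T> top;   // hidden implementation detail

    public void push(T item) {
        top = new Node<>(item, top);        // the new node becomes the top
    }

    public T pop() {
        if (top == null) {
            throw new IllegalStateException("stack is empty");
        }
        T value = top.value;
        top = top.next;                     // the node below becomes the new top
        return value;
    }

    public boolean isEmpty() {
        return top == null;
    }
}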
Abstraction: This is the ability of a program to ignore certain aspects of the information it is working on, which means it can focus on the core elements. Each object can complete its internal tasks, report, change state and communicate with other objects without revealing how it performs those operations. Abstraction is also expressed by the fact that a base object can define some common characteristics that derived objects extend, without the base object itself providing the method that executes them.
Encapsulation: This property does not allow users to change the state of an object directly; only the object's internal methods are allowed to change its state. How far the external environment can affect the internal data of an object is entirely up to the programmer. This guarantees the integrity of the object.
Inheritance: This property allows an object to acquire the properties that other objects already have. It allows an object to share or extend existing features without having to redefine them.
From these characteristics, object-oriented programming has the basic characteristic of ADTs (data abstraction), and it also adds polymorphism and inheritance. Therefore, it can be said that the ADT (data abstraction) is the foundation of OOP, and the remaining principles derive from it to complement it.