import sqlite3

class Database:
    def __init__(self, db):
        self.con = sqlite3.connect(db)
        self.cur = self.con.cursor()
        sql = """
            CREATE TABLE IF NOT EXISTS Customer(
                id INTEGER PRIMARY KEY,
                name TEXT,
                mobile TEXT,
                email TEXT,
                address TEXT
            )
        """
        self.cur.execute(sql)
        self.con.commit()

O = Database("Customer.db")

In the original code Customer.db was generated but held no data. There were two problems: the curly quotes around the SQL string are not valid Python quotes, and the trailing comma after "address text" is a SQL syntax error, which is why self.cur.execute(sql) raised an error; commenting that line out meant the table was never created at all. Even with the corrected SQL above, the table stays empty until rows are explicitly INSERTed.
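Since creating the table alone leaves it empty, here is a minimal runnable sketch that also inserts a row (the add_customer helper, the sample values, and the in-memory database are assumptions added for illustration):

```python
import sqlite3

class Database:
    def __init__(self, db):
        self.con = sqlite3.connect(db)
        self.cur = self.con.cursor()
        # Straight quotes, and no comma after the last column definition.
        self.cur.execute("""
            CREATE TABLE IF NOT EXISTS Customer(
                id INTEGER PRIMARY KEY,
                name TEXT,
                mobile TEXT,
                email TEXT,
                address TEXT
            )
        """)
        self.con.commit()

    def add_customer(self, name, mobile, email, address):
        # Hypothetical helper: the table stays empty until rows are inserted.
        self.cur.execute(
            "INSERT INTO Customer(name, mobile, email, address) VALUES (?, ?, ?, ?)",
            (name, mobile, email, address),
        )
        self.con.commit()

db = Database(":memory:")  # use "Customer.db" for a file on disk
db.add_customer("Asha", "9876543210", "asha@example.com", "Chennai")
print(db.cur.execute("SELECT name, mobile FROM Customer").fetchall())
# → [('Asha', '9876543210')]
```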
SELECT e.*, d.DepartmentName
FROM Employees e
JOIN Departments d ON e.DepartmentID = d.DepartmentID;
SELECT e.EmpName AS Employee, m.EmpName AS Manager
FROM Employees e
JOIN Employees m ON e.ManagerID = m.EmpID;
INNER JOIN:
Returns only the rows where there is a match between the columns in both tables.
If no match is found, the row is not included in the result.
It’s the most common type of join.
OUTER JOIN:
Returns all rows from one or both tables, even if there is no match in the other table.
LEFT OUTER JOIN (or just LEFT JOIN): Returns all rows from the left table, and the matched rows from the right table. If no match, the result will have NULL values for columns from the right table.
RIGHT OUTER JOIN (or just RIGHT JOIN): Returns all rows from the right table, and the matched rows from the left table. If no match, the result will have NULL values for columns from the left table.
FULL OUTER JOIN: Returns all rows from both tables. If there is no match, the result will have NULL values for the non-matching table’s columns.
GROUP BY:
Groups rows that have the same values in specified columns into summary rows (like finding the total count, sum, average, etc.).
It is typically used with aggregate functions such as COUNT(), SUM(), AVG(), MAX(), MIN().
HAVING:
Used to filter records after the GROUP BY has been applied.
It works similarly to the WHERE clause, but WHERE is used for filtering individual rows before grouping, while HAVING filters the grouped results.
SELECT DeptName, COUNT(*)
FROM Employees
GROUP BY DeptName;
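The GROUP BY example above can be extended with HAVING to filter the groups after aggregation; a small SQLite sketch (row contents are assumptions):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE Employees(EmpName TEXT, DeptName TEXT);
    INSERT INTO Employees VALUES ('Alice', 'IT'), ('Bob', 'IT'), ('Carol', 'HR');
""")

# GROUP BY collapses rows per department; HAVING then filters the groups.
# WHERE could not do this, because it runs before grouping.
rows = con.execute("""
    SELECT DeptName, COUNT(*) AS cnt
    FROM Employees
    GROUP BY DeptName
    HAVING COUNT(*) > 1
""").fetchall()
print(rows)  # [('IT', 2)]
```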
DISTINCT:
Used to remove duplicate rows from the result set based on the specified columns.
If you specify only one column, it will return the distinct values of that column.
If you specify multiple columns, the combination of values in those columns will be considered to determine uniqueness.
SELECT DISTINCT DeptName FROM Employees;
SELECT DISTINCT DeptName, EmpName FROM Employees;
Difference between DELETE and TRUNCATE:
DELETE:
Removes rows one by one and logs each deletion, which can be slower for large datasets.
You can use a WHERE clause to specify which rows to delete.
Can be rolled back if you’re working within a transaction (assuming no COMMIT has been done).
Can fire triggers if there are any triggers defined on the table (for example, BEFORE DELETE or AFTER DELETE triggers).
TRUNCATE:
Removes all rows in the table in one go, without scanning them individually.
Does not support a WHERE clause, so it always deletes all rows.
It’s much faster than DELETE because it doesn’t log individual row deletions (but it does log the deallocation of the table’s data pages).
Typically cannot be rolled back (though some DBMSs, such as PostgreSQL, do allow TRUNCATE inside a transaction), and DELETE triggers do not fire.
UNION:
Combines the results of two or more queries.
Removes duplicates: Only unique rows are included in the final result.
It performs a sort operation to eliminate duplicates, which can have a slight performance cost.
UNION ALL:
Also combines the results of two or more queries.
Keeps duplicates: All rows from the queries are included in the final result, even if they are the same.
It doesn’t perform the sort operation, which usually makes it faster than UNION.
SELECT EmpID, EmpName FROM Employees
UNION ALL
SELECT EmpID, EmpName FROM Contractors;
SELECT EmpID, EmpName FROM Employees
UNION
SELECT EmpID, EmpName FROM Contractors;
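The duplicate-handling difference between the two queries above can be sketched with SQLite (row contents are assumptions; 'Alice' appears in both tables):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE Employees(EmpID INTEGER, EmpName TEXT);
    CREATE TABLE Contractors(EmpID INTEGER, EmpName TEXT);
    INSERT INTO Employees VALUES (1, 'Alice');
    INSERT INTO Contractors VALUES (1, 'Alice'), (2, 'Bob');
""")

# UNION removes the duplicate (1, 'Alice') row.
union = con.execute("""
    SELECT EmpID, EmpName FROM Employees
    UNION
    SELECT EmpID, EmpName FROM Contractors
    ORDER BY EmpID
""").fetchall()
print(union)      # [(1, 'Alice'), (2, 'Bob')]

# UNION ALL keeps every row, duplicates included.
union_all = con.execute("""
    SELECT EmpID, EmpName FROM Employees
    UNION ALL
    SELECT EmpID, EmpName FROM Contractors
""").fetchall()
print(sorted(union_all))  # [(1, 'Alice'), (1, 'Alice'), (2, 'Bob')]
```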
COALESCE():
Returns the first non-NULL value in its argument list. For example, when some names in a SELECT are NULL, a default value or another column's value can be substituted.
SELECT COALESCE(NULL, 'Hello', 'World'); -- Output: Hello
SELECT id, COALESCE(name, nickname, 'Unknown') AS display_name FROM users;
NULLIF()
NULLIF(expression1, expression2) returns NULL if the two expressions (or column values) are equal; otherwise it returns the first expression (expression1).
SELECT NULLIF(10, 10); -- Output: NULL
SELECT NULLIF(10, 20); -- Output: 10
SELECT NULLIF(10, NULL); -- Output: 10
SELECT NULLIF(NULL, 10); -- Output: NULL
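Both functions work the same way in SQLite, so the examples above can be checked directly (the users table contents are assumptions):

```python
import sqlite3

con = sqlite3.connect(":memory:")

# COALESCE picks the first non-NULL argument; NULLIF collapses equal values to NULL.
print(con.execute("SELECT COALESCE(NULL, 'Hello', 'World')").fetchone()[0])  # Hello
print(con.execute("SELECT NULLIF(10, 10)").fetchone()[0])  # None (SQL NULL)
print(con.execute("SELECT NULLIF(10, 20)").fetchone()[0])  # 10

# Typical use: substitute a fallback when a column is NULL.
con.executescript("""
    CREATE TABLE users(id INTEGER, name TEXT, nickname TEXT);
    INSERT INTO users VALUES (1, NULL, 'Al'), (2, NULL, NULL);
""")
rows = con.execute(
    "SELECT id, COALESCE(name, nickname, 'Unknown') FROM users ORDER BY id"
).fetchall()
print(rows)  # [(1, 'Al'), (2, 'Unknown')]
```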
IF Condition:
The IF statement is used to check conditions and execute SQL code accordingly.
IF condition THEN
-- Code to execute if the condition is true
ELSIF condition THEN
-- Code block to execute if another condition is true
ELSE
-- Code to execute if the condition is false
END IF;
IF NOT FOUND THEN
RAISE NOTICE 'Employee with ID % not found!', emp_id;
emp_bonus := 0;
END IF;
CASE WHEN:
The CASE WHEN expression is used for conditional logic within a query (similar to IF but more flexible in SQL).
SELECT
name,
salary,
CASE
WHEN salary > 5000 THEN 'High Salary'
WHEN salary BETWEEN 3000 AND 5000 THEN 'Average Salary'
ELSE 'Low Salary'
END AS salary_category
FROM employees;
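The CASE expression above is plain SQL, so it can be tried in SQLite as well (salaries and names are assumptions):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE employees(name TEXT, salary INTEGER);
    INSERT INTO employees VALUES ('Ann', 6000), ('Ben', 4000), ('Cy', 2000);
""")

# Each row falls into the first WHEN branch whose condition is true.
rows = con.execute("""
    SELECT name,
           CASE
               WHEN salary > 5000 THEN 'High Salary'
               WHEN salary BETWEEN 3000 AND 5000 THEN 'Average Salary'
               ELSE 'Low Salary'
           END AS salary_category
    FROM employees
    ORDER BY name
""").fetchall()
print(rows)
# [('Ann', 'High Salary'), ('Ben', 'Average Salary'), ('Cy', 'Low Salary')]
```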
FOR LOOP:
DECLARE
i INT;
BEGIN
FOR i IN 1..5 LOOP
-- Perform an action for each iteration (e.g., insert or update a record)
INSERT INTO audit_log (action, timestamp)
VALUES ('Employee update', NOW());
END LOOP;
END;
FOR record IN SELECT column1, column2 FROM employees LOOP
-- Code block using record.column1, record.column2
END LOOP;
RAISE – used for printing messages (like System.out.println in Java):
RAISE NOTICE 'Employee: %, Salary: %', emp_name, emp_salary;
RAISE EXCEPTION 'An error occurred: %', error_message; -- prints the message and halts execution
RAISE INFO 'Employee: %, Salary: %', emp_name, emp_salary;
Stored Procedures in SQL:
A stored procedure is a reusable block of SQL code that performs specific tasks. It is stored in the database and can be called as needed. Stored procedures are used for:
Modularizing complex SQL logic.
Improving performance by reducing network traffic.
Ensuring code reuse and security (by granting permissions to execute rather than to the tables directly).
Example:
A stored procedure to insert a new employee record:
CREATE PROCEDURE add_employee(emp_name VARCHAR, emp_salary NUMERIC)
LANGUAGE plpgsql AS
$$
BEGIN
INSERT INTO employees (name, salary) VALUES (emp_name, emp_salary);
END;
$$;
Execution:
CALL add_employee('John Doe', 50000);
Functions in SQL:
A SQL function is a reusable block of SQL code that performs specific tasks. It is stored in the database and can be called as needed. It is similar to a procedure but returns a single value or table. Functions are typically used for computations or transformations. Example: A function to calculate the yearly salary:
CREATE FUNCTION calculate_yearly_salary(monthly_salary NUMERIC)
RETURNS NUMERIC
LANGUAGE plpgsql AS
$$
BEGIN
RETURN monthly_salary * 12;
END;
$$;
Execution:
SELECT calculate_yearly_salary(5000);
Inside PL/pgSQL code (for example, inside a trigger function), use PERFORM calculate_yearly_salary(5000); to call it and discard the result, or SELECT calculate_yearly_salary(5000) INTO some_variable; to capture it.
Key Differences Between Procedures and Functions:
Return Type:
Function: Always returns a value.
Procedure: Does not return a value.
Usage:
Function: Can be used in SQL queries (e.g., SELECT).
Procedure: Called using CALL, cannot be used in SQL queries.
Transaction Control:
Function: Cannot manage transactions.
Procedure: Can manage transactions (e.g., COMMIT, ROLLBACK).
Side Effects:
Function: Should not have side effects (e.g., modifying data).
Procedure: Can modify data and have side effects.
Calling Mechanism:
Procedure: Called using CALL procedure_name().
Function: Called within SQL expressions, like SELECT function_name().
TRIGGER:
A trigger is a special kind of stored procedure that automatically executes (or “fires”) when certain events occur in the database, such as INSERT, UPDATE, or DELETE. Triggers can be used to enforce business rules, validate data, or maintain audit logs. Key Points:
Types of Triggers:
BEFORE Trigger: Fires before the actual operation (INSERT, UPDATE, DELETE).
AFTER Trigger: Fires after the actual operation.
INSTEAD OF Trigger: Used to override the standard operation, useful on views. (Available in SQL Server, and PostgreSQL also supports INSTEAD OF triggers on views.)
Trigger Actions: The trigger action can be an operation like logging data, updating related tables, or enforcing data integrity.
Trigger Events: A trigger can be set to fire on certain events, such as when a row is inserted, updated, or deleted.
Trigger Scope: Triggers can be defined to act on either a row (executing once for each affected row) or a statement (executing once for the entire statement).
A trigger can be created to log changes in a Users table whenever a record is updated, or it could prevent deleting a record if certain conditions aren’t met.
Example (MySQL-style syntax, where the body goes inside the trigger itself; in PostgreSQL the body lives in a separate trigger function, as in the next example):
CREATE TRIGGER LogEmployeeAgeUpdate
AFTER UPDATE ON Employees
FOR EACH ROW
BEGIN
IF OLD.Age <> NEW.Age THEN
INSERT INTO EmployeeLogs (EmployeeID, OldAge, NewAge)
VALUES (OLD.EmployeeID, OLD.Age, NEW.Age);
END IF;
END;
Example:
CREATE OR REPLACE FUNCTION prevent_employee_delete()
RETURNS TRIGGER AS
$$
BEGIN
-- Check if the employee is in a protected department (for example, department_id = 10)
IF OLD.department_id = 10 THEN
RAISE EXCEPTION 'Cannot delete employee in department 10';
END IF;
RETURN OLD;
END;
$$
LANGUAGE plpgsql;
-- Attach the function to a trigger
CREATE TRIGGER prevent_employee_delete_trigger
BEFORE DELETE ON Employees
FOR EACH ROW
EXECUTE FUNCTION prevent_employee_delete();
Create a trigger that logs age changes whenever rows in Employees are inserted, updated, or deleted:
CREATE OR REPLACE FUNCTION log_employee_changes()
RETURNS TRIGGER AS
$$
BEGIN
-- Handle INSERT operation
IF (TG_OP = 'INSERT') THEN
INSERT INTO EmployeeChangeLog (EmployeeID, OperationType, NewAge, ChangeTime)
VALUES (NEW.EmployeeID, 'INSERT', NEW.Age, CURRENT_TIMESTAMP);
RETURN NEW;
-- Handle UPDATE operation
ELSIF (TG_OP = 'UPDATE') THEN
INSERT INTO EmployeeChangeLog (EmployeeID, OperationType, OldAge, NewAge, ChangeTime)
VALUES (OLD.EmployeeID, 'UPDATE', OLD.Age, NEW.Age, CURRENT_TIMESTAMP);
RETURN NEW;
-- Handle DELETE operation
ELSIF (TG_OP = 'DELETE') THEN
INSERT INTO EmployeeChangeLog (EmployeeID, OperationType, OldAge, ChangeTime)
VALUES (OLD.EmployeeID, 'DELETE', OLD.Age, CURRENT_TIMESTAMP);
RETURN OLD;
END IF;
RETURN NULL;
END;
$$
LANGUAGE plpgsql;
CREATE TRIGGER log_employee_changes_trigger
AFTER INSERT OR UPDATE OR DELETE
ON Employees
FOR EACH ROW
EXECUTE FUNCTION log_employee_changes();
Attaching the Trigger to the Employees Table:
The CREATE TRIGGER statement above attaches the log_employee_changes() function to the Employees table, so changes are logged: the trigger fires on insert, update, and delete operations.
TG_OP: A special variable in PostgreSQL that holds the operation type (INSERT, UPDATE, or DELETE).
NEW and OLD: References to the row being inserted or updated (NEW) and to the row before it was updated or deleted (OLD).
EmployeeChangeLog: The (programmer-defined) table that stores the details of the changes (employee ID, operation type, old and new values, timestamp).
What happens when you omit FOR EACH ROW?
Statement-Level Trigger: The trigger will fire once per SQL statement, regardless of how many rows are affected. This means it won’t have access to the individual rows being modified.
For example, if you run an UPDATE statement that affects 10 rows, the trigger will fire once (for the statement) rather than for each of those 10 rows.
No Access to Row-Specific Data: You won’t be able to use OLD or NEW values to capture the individual row’s data. The trigger will just execute as a whole, without row-specific actions.
With FOR EACH ROW: The trigger works on each affected row, and you can track specific changes (e.g., old vs. new values).
Without FOR EACH ROW: The trigger fires once per statement and doesn't have access to row-specific data.
Example (statement-level trigger; MySQL-style syntax shown):
CREATE TRIGGER LogEmployeeAgeUpdate
AFTER UPDATE ON Employees
BEGIN
-- Perform some operation, but it won't track individual rows.
INSERT INTO AuditLogs (EventDescription)
VALUES ('Employees table updated');
END;
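The row-level behavior is easy to observe in SQLite, whose trigger syntax resembles the MySQL-style examples above (table contents and the WHEN clause are assumptions; PostgreSQL would use a trigger function instead):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE Employees(EmployeeID INTEGER PRIMARY KEY, Age INTEGER);
    CREATE TABLE EmployeeLogs(EmployeeID INTEGER, OldAge INTEGER, NewAge INTEGER);

    -- Row-level AFTER UPDATE trigger: fires once per updated row.
    CREATE TRIGGER LogEmployeeAgeUpdate
    AFTER UPDATE ON Employees
    FOR EACH ROW
    WHEN OLD.Age <> NEW.Age
    BEGIN
        INSERT INTO EmployeeLogs VALUES (OLD.EmployeeID, OLD.Age, NEW.Age);
    END;

    INSERT INTO Employees VALUES (1, 30), (2, 40);
    UPDATE Employees SET Age = Age + 1;  -- affects 2 rows, so the trigger fires twice
""")
logs = con.execute("SELECT * FROM EmployeeLogs ORDER BY EmployeeID").fetchall()
print(logs)  # [(1, 30, 31), (2, 40, 41)]
```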
NORMALIZATION:
1st NF:
Each column/attribute should have atomic value or indivisible value, ie only one value.
Rows should not be repeated (i.e., rows are unique); a primary key is not strictly required at this stage.
2nd NF:
Must fulfill 1NF. [candidate key: a composite key may form the uniqueness]
All non-key columns must be fully dependent on the whole candidate key, not on just part of it. For example, take a denormalized design (before normalization, all tables and values sit together in a single table) whose candidate key is (OrderID + ProductID). Then OrderDate, OrderStatus, and quantity depend only on OrderID, not on ProductID; ProductName depends only on ProductID, not on the order; and customer details depend on neither part. These partial dependencies violate 2NF, so the table is partitioned by what each column actually depends on: products go to a separate Products table, orders to an Orders table, and customers to a Customers table, so that every non-key column depends on the full key of its own table.
A primary key is then created for each separated table, ensuring that all non-key columns depend completely on that primary key, and foreign-key relationships are established to connect the tables.
3rd NF:
Must fulfill till 2ndNF.
Remove transitive dependencies: a non-key column depends on another non-key column, which in turn depends on the key. Updating one value can then silently affect other rows that share the same intermediate value. To avoid this, split the table; for example, salespeople's data is separated out of the Orders table.
What is a Transitive Dependency? Let's break this down with a simple example:

StudentID | Department | HODName
S001      | IT         | Dr. Rajan
S002      | CS         | Dr. Priya
StudentID → Department (StudentID determines the department). Department → HODName (Department determines the HOD name). So HODName depends on StudentID only indirectly, through Department, rather than on the key itself.
This is a transitive dependency, and we need to remove it.
A transitive dependency means a non-prime attribute (not part of the candidate key) depends indirectly on the primary key through another non-prime attribute.
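A sketch of the fix for the example above: HODName moves to a Departments table keyed by Department, and the HOD is recovered via a join instead of being duplicated per student (table and column names are assumed from the example):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE Departments(Department TEXT PRIMARY KEY, HODName TEXT);
    CREATE TABLE Students(
        StudentID TEXT PRIMARY KEY,
        Department TEXT REFERENCES Departments(Department)
    );
    INSERT INTO Departments VALUES ('IT', 'Dr. Rajan'), ('CS', 'Dr. Priya');
    INSERT INTO Students VALUES ('S001', 'IT'), ('S002', 'CS');
""")

# Each fact is stored once; changing a HOD touches exactly one row in Departments.
rows = con.execute("""
    SELECT s.StudentID, d.HODName
    FROM Students s JOIN Departments d ON s.Department = d.Department
    ORDER BY s.StudentID
""").fetchall()
print(rows)  # [('S001', 'Dr. Rajan'), ('S002', 'Dr. Priya')]
```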
Query Optimization is the process of improving the performance of a SQL query by reducing the amount of time and resources (like CPU, memory, and I/O) required to execute the query. The goal is to retrieve the desired data as quickly and efficiently as possible.
Important implementation of Query Optimization:
Indexing: Indexes on frequently used columns: As you mentioned, indexing columns that are part of the WHERE, JOIN, or ORDER BY clauses can significantly improve performance. For example, if you’re querying a salary column frequently, indexing it can speed up those queries. Composite indexes: If a query filters by multiple columns, a composite index on those columns might improve performance. For instance, INDEX (first_name, last_name) could be more efficient than two separate indexes on first_name and last_name.
Instead of SELECT *, select only the required columns, and use LIMIT to fetch only the required number of rows.
Optimizing JOIN Operations: Use appropriate join types: For example, avoid OUTER JOIN if INNER JOIN would suffice. Redundant or unnecessary joins increase query complexity and processing time.
Use of EXPLAIN to Analyze Query Plan: Running EXPLAIN before a query allows you to understand how the database is executing it. You can spot areas where indexes are not being used, unnecessary full table scans are happening, or joins are inefficient.
How to Implement Query Optimization:
Use Indexes:
Create indexes on columns that are frequently queried or used in JOIN, WHERE, or ORDER BY clauses. For example, if you frequently query a column like user_id, an index on user_id will speed up lookups. Use multi-column indexes for queries involving multiple columns.
CREATE INDEX idx_user_id ON users(user_id);
2. Rewrite Queries:
Avoid using SELECT * and instead select only the necessary columns.
Break complex queries into simpler ones and use temporary tables or Common Table Expressions (CTEs) if needed.
SELECT name, age FROM users WHERE age > 18;
3. Use Joins Efficiently:
Ensure that you are using the most efficient join type for your query (e.g., prefer INNER JOIN over OUTER JOIN when possible).
Join on indexed columns to speed up the process.
4. Optimize WHERE Clauses:
Make sure conditions in WHERE clauses are selective and reduce the number of rows as early as possible.
Use AND and OR operators appropriately to filter data early in the query.
5. Limit the Number of Rows:
Use the LIMIT clause when dealing with large datasets to fetch only a required subset of data.
Avoid retrieving unnecessary data from the database.
6. Avoid Subqueries When Possible:
Subqueries can be inefficient because they often lead to additional scans of the same data. Use joins instead of subqueries when possible.
If you must use subqueries, try to write them in a way that they don’t perform repeated calculations.
7. Analyze Execution Plans:
Use EXPLAIN to see how the database is executing your query. This will give you insights into whether indexes are being used, how tables are being scanned, etc.
Example:
EXPLAIN SELECT * FROM users WHERE age > 18;
8. Use Proper Data Types:
Choose the most efficient data types for your columns. For instance, use INTEGER for numeric values rather than VARCHAR, which takes more space and requires more processing.
9. Avoid Functions on Indexed Columns:
Using functions like UPPER(), LOWER(), or DATE() on indexed columns in WHERE clauses can prevent the database from using indexes effectively.
Instead, try to perform transformations outside the query or ensure indexes are used.
10. Database Configuration:
Ensure the database system is configured properly for the hardware it’s running on. For example, memory and cache settings can significantly affect query performance.
Example of Optimized Query:
Non-Optimized Query:
SELECT * FROM orders
WHERE customer_id = 1001
AND order_date > '2023-01-01';
This query might perform a full table scan if customer_id and order_date are not indexed.
Optimized Query:
CREATE INDEX idx_customer_order_date ON orders(customer_id, order_date);

SELECT order_id, order_date, total_amount
FROM orders
WHERE customer_id = 1001
AND order_date > '2023-01-01';
In this optimized version, an index on customer_id and order_date helps the database efficiently filter the rows without scanning the entire table.
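The effect of the index can be inspected with SQLite's EXPLAIN QUERY PLAN (the table contents are assumptions; the exact plan wording varies by SQLite version):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE orders(order_id INTEGER, customer_id INTEGER, order_date TEXT);
    INSERT INTO orders VALUES (1, 1001, '2023-05-01'), (2, 1002, '2022-03-01');
""")
query = "SELECT order_id FROM orders WHERE customer_id = 1001 AND order_date > '2023-01-01'"

# Before indexing: the plan reports a scan of the whole table.
print(con.execute("EXPLAIN QUERY PLAN " + query).fetchall())

con.execute("CREATE INDEX idx_customer_order_date ON orders(customer_id, order_date)")

# After indexing: the plan should mention idx_customer_order_date instead of a full scan.
plan = con.execute("EXPLAIN QUERY PLAN " + query).fetchall()
print(plan)
print(con.execute(query).fetchall())  # [(1,)]
```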
Logical order of query execution: FROM and/or JOIN → WHERE → GROUP BY → HAVING → SELECT → DISTINCT → ORDER BY → LIMIT and/or OFFSET.
Command Types:
References : Aysha Beevi
CAST()
CAST is used to typecast; alternatively, the ::target_data_type operator can be used.
SELECT 'The current date is: ' || CURRENT_DATE::TEXT;
SELECT '2024-12-21'::DATE::TEXT;
SELECT CAST('2024-12-21' AS DATE);
|| -> concatenation operator
DATE functions:
SELECT CURRENT_DATE; -- Output: 2024-12-21
SELECT CURRENT_TIME; -- Output: 09:15:34.123456+05:30
SELECT NOW(); -- Output: 2024-12-21 09:15:34.123456+05:30
SELECT AGE('2020-01-01', '2010-01-01'); -- Output: 10 years 0 mons 0 days
SELECT AGE('1990-05-15'); -- Output: 34 years 7 mons 6 days (calculated from NOW())
SELECT EXTRACT(YEAR FROM NOW()); -- Output: 2024
SELECT EXTRACT(MONTH FROM CURRENT_DATE); -- Output: 12
SELECT EXTRACT(DAY FROM TIMESTAMP '2024-12-25 10:15:00'); -- Output: 25
The DATE_TRUNC() function truncates a date or timestamp to the specified precision, "resetting" the smaller parts of the date/time to their starting values.
SELECT DATE_TRUNC('month', TIMESTAMP '2024-12-21 10:45:30'); -- Output: 2024-12-01 00:00:00 (the 'month' precision resets the day to the 1st and the time to 00:00:00)
SELECT DATE_TRUNC('year', TIMESTAMP '2024-12-21 10:45:30'); -- Output: 2024-01-01 00:00:00
SELECT DATE_TRUNC('day', TIMESTAMP '2024-12-21 10:45:30'); -- Output: 2024-12-21 00:00:00
window_function(): the function that operates over the specified window. Common window functions include ROW_NUMBER(), RANK(), SUM(), AVG(), etc.
PARTITION BY: (Optional) Divides the result set into partitions to which the window function is applied. Each partition is processed separately.
ORDER BY: (Optional) Orders the rows in each partition before the window function is applied.

window_function() OVER (   -- RANK(), SUM(), etc. can come here
    PARTITION BY column_name(s)
    ORDER BY column_name(s)
);
SELECT
department_id,
employee_id,
salary,
SUM(salary) OVER (PARTITION BY department_id ORDER BY salary) AS running_total
FROM employees;
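The running-total query above also runs on SQLite 3.25+ (which added window functions); the table contents below are assumptions:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE employees(department_id INTEGER, employee_id INTEGER, salary INTEGER);
    INSERT INTO employees VALUES (101, 1, 50000), (101, 3, 70000), (102, 2, 60000);
""")

# SUM() OVER a partition ordered by salary yields a per-department running total.
rows = con.execute("""
    SELECT department_id, employee_id, salary,
           SUM(salary) OVER (PARTITION BY department_id ORDER BY salary) AS running_total
    FROM employees
    ORDER BY department_id, salary
""").fetchall()
for r in rows:
    print(r)
# (101, 1, 50000, 50000)
# (101, 3, 70000, 120000)
# (102, 2, 60000, 60000)
```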
CURSOR:
DO $$
DECLARE
    emp_name VARCHAR;
    emp_salary DECIMAL;
    emp_cursor CURSOR FOR SELECT name, salary FROM employees;
BEGIN
    OPEN emp_cursor;
    LOOP
        FETCH emp_cursor INTO emp_name, emp_salary;
        EXIT WHEN NOT FOUND; -- Exit the loop when no rows are left
        RAISE NOTICE 'Employee: %, Salary: %', emp_name, emp_salary;
    END LOOP;
    CLOSE emp_cursor;
END $$;
Basic Data Types in PostgreSQL
TEXT, VARCHAR, CHAR: Working with strings.
INTEGER, BIGINT, NUMERIC: Handling numbers.
DATE, TIMESTAMP: Date and time handling.
OVER CLAUSE
In PostgreSQL, the OVER() clause is used in window functions to define a window of rows over which a function operates. Here ROW_NUMBER() just creates a serial number from 1 (rows are ordered by salary desc). Note that a window-function alias cannot be referenced directly in WHERE; wrap it in a subquery:
SELECT name, row_num
FROM (
    SELECT name, ROW_NUMBER() OVER (ORDER BY salary DESC) AS row_num
    FROM employees
) t
WHERE row_num <= 5;
RANK()
Partition the table records based on the dept id, then inside each partition order by salary desc with ranks 1, 2, 3, ... In RANK(), rows with the same salary get the same rank (and the following rank is skipped, e.g. 1, 2, 2, 4).
SELECT department_id, name, salary,
       RANK() OVER (PARTITION BY department_id ORDER BY salary DESC) AS rank
FROM employees;

Output:
department_id | name    | salary | rank
101           | Charlie | 70,000 | 1
101           | Alice   | 50,000 | 2
101           | Frank   | 50,000 | 2
102           | Eve     | 75,000 | 1
102           | Bob     | 60,000 | 2
103           | David   | 55,000 | 1
NTILE(3) divides employees into 3 equal salary buckets (thirds, not quartiles).
SELECT id, name, salary, NTILE(3) OVER (ORDER BY salary DESC) AS bucket FROM employees;

id | name    | salary | bucket
5  | Eve     | 75,000 | 1
3  | Charlie | 70,000 | 1
2  | Bob     | 60,000 | 2
4  | David   | 55,000 | 2
1  | Alice   | 50,000 | 3
6  | Frank   | 50,000 | 3
FIRST_VALUE() retrieves the first name in each department based on descending salary.
SELECT department_id, name, salary,
       FIRST_VALUE(name) OVER (PARTITION BY department_id ORDER BY salary DESC) AS top_earner
FROM employees;

Output:
department_id | name    | salary | top_earner
101           | Charlie | 70,000 | Charlie
101           | Alice   | 50,000 | Charlie
101           | Frank   | 50,000 | Charlie
102           | Eve     | 75,000 | Eve
102           | Bob     | 60,000 | Eve
103           | David   | 55,000 | David
First the FROM table is read, then the WHERE condition is applied.
You cannot call RANK() directly in the WHERE clause: window functions are evaluated after WHERE, so the ranked result must first be stored in a result set. A subquery or CTE (Common Table Expression) is executed first into a temporary result set, and the outer SELECT then filters from that result set.
Below, the ranking is computed in a subquery; the subquery executes first, and the outer query then uses its result.
In each dept, the top earner with name and salary (consider the employees table above):
SELECT department_id, name, salary
FROM (
    SELECT department_id, name, salary,
           RANK() OVER (PARTITION BY department_id ORDER BY salary DESC) AS rank
    FROM employees
) ranked_employees
WHERE rank = 1;
department_id | name    | salary
101           | Charlie | 70,000
102           | Eve     | 75,000
103           | David   | 55,000
Resultset – here RankedSalaries is Resultset
WITH RankedSalaries AS (
    SELECT salary, RANK() OVER (ORDER BY salary DESC) AS rank
    FROM employees
)
SELECT salary FROM RankedSalaries WHERE rank = 2;
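The CTE pattern above (find the second-highest salary) can be checked in SQLite, which also supports WITH and RANK(); the salaries are assumptions (the alias is renamed rnk to avoid clashing with the function name):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE employees(name TEXT, salary INTEGER);
    INSERT INTO employees VALUES ('Eve', 75000), ('Charlie', 70000), ('Bob', 60000);
""")

# The CTE materializes the ranks first; the outer query then filters on them.
second = con.execute("""
    WITH RankedSalaries AS (
        SELECT salary, RANK() OVER (ORDER BY salary DESC) AS rnk
        FROM employees
    )
    SELECT salary FROM RankedSalaries WHERE rnk = 2
""").fetchall()
print(second)  # [(70000,)]
```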
Here, RankedSalaries is a temporary result set or CTE (Common Table Expression)
Reference: Learnt from ChatGPT and Picture from Ms.Aysha
What is Spring Framework? Before we proceed with the definition let’s first understand what is framework.
Framework in software industry, is an environment where we can see collection of reusable software components or tools which are used to build the applications more efficiently with minimal code and time. It makes the developers life easier.
For example, if we are going to travel and stay somewhere, a furnished flat is preferable to setting up a new home: everything is ready-made to pick up and use. Another example is LibreOffice Draw, where you can draw, paint, or create a logo: a set of drawing tools is provided, and you just pick one up and use it.
Definition: Spring is a comprehensive framework, provides a broad set of tools and solutions for almost all kind of application development, whether you are building a small standalone application or a complex enterprise system, in particular web applications.
What is Spring Boot? Spring Boot is a layer on top of Spring that simplifies application development, letting the developer focus mostly on the business logic and leaving the boilerplate code to the framework.
Spring vs Spring Boot: The main difference is that with Spring the developer carries more responsibility (or must be an advanced developer) and handles each step of the setup, which obviously takes more time. With Spring Boot the same work can be done easily, quickly, and safely: it can be done with plain Spring too, but Spring Boot takes care of many tasks and minimizes the coder's work.
Ex. Spring – Birthday party event arranging by parents. (Each activity should be taken care of, like venue, invitation, cakes, decoration, food arrangements, return gifts etc.)
Spring Boot – An event organizer will take care of everything, so the parents just concentrate on the child and guests (like business logic) – whatever they want event organizer(spring boot) will assist by providing it.
What are Spring Boot’s Advantages? Spring Boot is a layer on top of Spring that simplifies application development by providing the following:
Faster Setup (Based on the dependencies and annotations).
Simplifies development by Auto Configuration, application.properties or application.yml
Embedded web servers shipped with the finished product (.jar/.war), eliminating the need for an external server like Tomcat to run the application during deployment.
Production-Ready features (Ex. Health checks-monitor application’s health, logging etc.)
Simplified Deployment.
Opinionated defaults. (TBD)
Security features.
Community and Ecosystem
Spring Framework’s main advantages are, – Inversion of Control – Dependency Injection
IoC (Inversion of Control): The core principle of the Spring Framework. Usually the program flow (i.e., the developer) controls execution; here, as the name suggests, control is reversed: the framework controls the flow. Ex.: the event organizer already has everything the party (or the parents) needs. It lets developers work with minimal, better-organized code.
It makes everything ready to build the application, instead of searching for or creating things whenever required. My understanding here is:
It scans the dependencies – based on that creates the required bean, checks the required .jar files are available in the class path.
Through dependency injection, it passes beans as parameters into other beans wherever @Autowired is detected.
Spring Boot starts and initializes the IoC container (via ApplicationContext, the container for all beans). IoC scans the classpath for annotated classes like @Component, @Service, @Controller, @Repository. It creates beans (objects) for those classes and makes them available for dependency injection. Spring Boot reads application.properties or application.yml and applies those configurations to the beans or the application as needed.
Dependency Injection (DI): A design pattern that reduces the connections between system components, making the code more modular, maintainable, and testable. It avoids tight coupling between classes and makes them loosely coupled.
Coupling here is one class depends on another class.
For example, at the same birthday party, suppose the parents arranged the whole setup around one theme (Dora-Bujju) for the kid, and later the kid changes its mind and asks for another theme (Julie - Jackie Chan). Now there is wasted time and money, and the parents' frustration too. Instead, if they tell the organizer to change the theme (it is the organizer's job, and there are still some days left), it gets updated easily.
In Dependency Injection, when one class wants to use another class, it should not construct that class's object (bean) directly inside its body (tight coupling), because future modification becomes harder. Instead, the bean is passed as a parameter (injected) into the class that needs it (constructor DI). If the injected bean needs to change in the future, it is simply replaced with another bean in the parameter section.
Below listed are the high level steps involved to create a basic Django application.
install python
use venv before installing django =>python -m venv tutorial-env
activate the venv: tutorial-env\Scripts\activate
install django in the venv=> python -m pip install django
check version => django-admin --version
Create a django project => django-admin startproject myApp
To start the webserver =>python manage.py runserver
From the myApp location, open cmd and type "code ." which will open VS Code for this project. From VS Code:
1. __init__.py => when the project receives a request, this file tells Python the directory is a package
2. asgi.py & wsgi.py => both required during deployment
3. settings.py => DB, language, timezone, static files, URL, etc.
URLs.py => will contain the list of urls used for the project
outside of myApp, db.sqlite3 will be used by default as a lightweight DB
Within the project myAPP, we can create multiple application. to create a new app => python manage.py startapp blog 1.migrations => DB related
init => represent that it is a pkg
admin => for admin purposes
apps => app related config eg: name of the app etc.,
5. models => contents
6. tests => used for testing the app
7. views
Register the app: in myApp -> settings.py, under INSTALLED_APPS, add the recently created app 'blog'.
Create the first view (in general we receive the request and send back the response) in blog -> views.py:
1. Import HttpResponse => from django.http import HttpResponse
   a. Create a python function which takes request as a parameter and returns an HttpResponse => a static string output
2. Under blog, create a python file named "urls.py"
   a. Within that file add the urlpatterns list, similar to myApp -> urls.py
   b. In this file, import path, and import views from the app => from . import views
   c. To the urlpatterns list add an entry for the function created under views.py: path("", views.index, name="index") ("" represents the home directory)
3. In myApp -> urls.py:
   a. import path, include from django.urls
   b. under urlpatterns, add path("", include("blog.urls")) -> including the urls from blog -> urls.py
Time to test the changes. Go to the application url. it should show the content from views.py->index function
Alternatively, if we want to serve the index at a separate url: a. In myApp -> urls.py -> in urlpatterns.path -> instead of "", provide "blogs/". b. Test the same with both the default application url and url/blogs.
In the previous demo, I used a separate JavaScript function for each case to achieve the result. But it is not best practice to have multiple JS functions, and the HTML content should be isolated from the JS content; in my previous example I had HTML content inside the JS code.
In this demo, I have called just one JS function with different parameters whenever different links are being clicked. Attaching the code for the same
When we highlight a menu item, it should change the font color. And when we click on a menu item, only the content related to that link should be displayed.
Solution:
I tried this with a basic HTML page, calling a separate function for each menu item's onclick event. Attaching the code snippet.
Dear author Nakkeeran, warm and affectionate greetings.
A few days ago I got the opportunity to read your series Thuruvangal (துருவங்கள்), and I finished reading it. I have no history of ever picking up a book and reading it from cover to cover; in that respect, I too am a toddler!
First, a few lines on my view of the character Madhan.
Only someone whose blood, flesh, and very breath are steeped in Linux could write such a character. And making a piece of software the bridge between Madhan and Karthika is the rarest of the rare.
Moreover, Madhan seems to be a peace-lover by nature. That is, even though his appearance is a little rough, I feel his heart carries the nobility and honesty of Linux.
At the same time, if anything goes wrong for him or for those around him, he brings it under control with the same iron hand as Linux. I also feel that generosity towards those who seek his help is another of his qualities.
At the opposite pole, Karthika always appears like a clear stream.
Let me mention a few places in the story that captivated me. In these times when men are always portrayed negatively, the character of Madhan was written to show how gentle a man can be.
In particular, whenever Karthika is alone with Madhan, I see it as a challenge to Madhan's manliness. We have seen how an average man would behave in such situations, or at least that is how it has been shown so far in stories and on screen.
Especially when the two stand talking on the hotel room balcony, and again when they watch the film Guna at Karthika's house, the story states firmly and clearly how a man ought to conduct himself.
At the other end, Karthika moves about as a modern woman to whom the traditional feminine virtues a woman is always expected to follow, fear, naivety, and shyness, simply do not apply. Why should the man always be the one to come forward to declare or show his love or emotions? Like the new-age woman Bharathi envisioned, she expresses her feelings boldly.
What is the chief success of this series? Whenever the feeling starts to creep in that the conversation about software is dragging on, love walks in. And whenever it seems the series has forgotten the software and turned into a love story, Linux walks in. Because the storyline keeps shifting back and forth like this, the interest never flags.
Moreover, while reading this story I kept a Linux laptop by my side. The things Madhan explains to Karthika, we readers could try out ourselves. Could we try out everything Madhan described? No, and that is not the purpose of the story anyway. Kindling the spark of Linux and its reach within us in the first half, and then describing the grand, universal form of Linux in the second half, was wonderful.
You did not forget to speak the politics surrounding the story either. Because you added it the way one adds just enough salt for taste, everyone is able to accept it.
In only one place am I unsure whether you set out to show Madhan's character as somewhat hasty. The scene where he gets angry with Karthika's father could perhaps have been fleshed out a little more. It was also surprising that he concluded what kind of person the father was from just one or two remarks and reacted accordingly. As far as I am concerned, I am certain Madhan is not that kind of person.
The other characters in the story are also portrayed very well, for example Karthika's friend, Madhan's friend Uday, the friend's girlfriend, the friend's mother, Karthika's parents, the CEO of the office where Madhan works, right down to the young lad handling the loudspeaker at the end. All of them are wonderful.
Finally, what can I say about your love for ARR? We have all heard those songs, but whether we have heard them with the background you described is questionable. In particular, like me, many would have believed until now that the song 'Indiraiyo' from the film Kadhalan was written by Vairamuthu.
Had the same story been told as one happening between two men, or between two women, I doubt it would have been this engaging. Likewise, leaving behind Linux-related information and resources wherever possible, for example IRC and ILUGC, deserves praise. Wonderful.
In sum, if I must put it in one line: when I finished the story, my heart longed to meet three people in particular.
1) Madhan, for his Linux prowess
2) Karthika, to find out whether she really is as you described
3) You, Nakkeeran, for creating such a madman. My heartfelt wishes that you keep writing and trap many more like me
in your web.
In Chapter 3, the author introduces a new concept called the 80/20 rule. You can see this pattern in almost everything you come across in day-to-day life: in a given area or country, 20 percent of the people will govern the remaining 80 percent, or 20 percent will be wealthier than the other 80 percent; 20 percent of the customers will be responsible for 80 percent of the sales; and so on. In the same way, 20 percent of your activities will produce 80 percent of your results.
So identifying that 20 percent is what matters most, and the author calls that 20 percent the "vital few" and the remaining 80 percent the "trivial many". As discussed in the previous chapters, we tend to spend our energy on things from the 80 percent and then feel that we did not achieve anything. It has to happen in the reverse order: from the entire pile of activities you have listed, pick the top items that you feel are either difficult to achieve or tedious yet valuable.
Focus should be given to those 20 percent items. If you knock off the items from the "vital few", you feel accomplished, and achieving the things from the 80 percent, the "trivial many", shouldn't be that tough.
As a reader, even I am trying to follow and implement these principles in my day-to-day life, and I will share my experience in future blogs. To me this is very much achievable, and we all should give it a try.
Happy reading and see you all in the next chapter discussion.
In continuation from Chapter 1, Chapter 2 starts with how to approach the priority list, because for a normal person [without any priorities, like me :-)] every task is a priority. The author suggests an alternative.
It goes like this. Take a piece of paper, or use a text editor, name it "Priorities", and start writing down all the tasks that come to your mind that you want to do. Not necessarily within a day, week, or month; just at random. For example: complete reading a book, finish the assignment, save a minimum amount, practice meditation, and so on. By the end of this activity, you will have an exhaustive list of tasks you wish you could complete.
Next, take one more sheet of paper (or open one more text editor) and name it "Monthly". From the list you created in "Priorities", pick the tasks that could be, or have to be, completed in the next month. From our example, we can choose "finish the assignment" and add it to the "Monthly" list.
Now the monthly list is much shorter than the entire priorities list, and you have a clear idea of what needs to be done in the next 30 days. From here, create one more list, "Weekly", and do the same exercise of choosing the things that have to be, or could be, completed in the next 7 days, and include them in the weekly list.
The hard part is now complete. From here, connect this with what was described in Chapter 1: pick the frog that is bigger to eat and add it to your daily list.
From the larger picture, the moment you knock off one task from the daily list, it creates a ripple effect: it knocks off a task from the weekly list, then the monthly list, and then from the entire priorities list. You will feel accomplished by the end of the first week. And if we do this in week 2, we will feel even happier and more accomplished.
That is all about Chapter 2. Once again, this is my understanding and has nothing to do with the author's narration.
See you again in Chapter-3! Thank you for reading!
What is plain text, in my point of view? It is simply text without any makeup or add-ons; it is just organic content. For example,
A handwritten grocery list that our mother used to give to our father
A to-do list
An essay or composition written in our school days
Why is plain text important?
- Only the quality of the content scores here: there is no marketing through beautification or fancy formats.
- Less storage.
- Ideal for long-term data storage, because of cross-platform compatibility and universal accessibility.
- Much software uses plain text for configuration files (.ini, .conf, .json).
- Data interchange (.csv moves data between databases and spreadsheet applications).
- Command-line environments, and even cryptography.
- Batch processing: many batch processes use plain text files to define lists of actions or tasks to be executed in batch mode, such as renaming files, converting data formats, or running programs.
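As a small illustration of the data-interchange point, here is a sketch using Python's csv module; the names and numbers in the payload are made up:

```python
import csv
import io

# A plain-text CSV payload, as it might travel between a database
# export and a spreadsheet application.
payload = "name,mobile\nKumar,9999999999\nMala,8888888888\n"

# Parse the plain text back into structured rows.
rows = list(csv.DictReader(io.StringIO(payload)))
print(rows[0]["name"])  # Kumar
print(len(rows))        # 2
```

Because the payload is just text, any tool on any platform can read and write it, which is exactly the portability argument above.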
So plain text is simple, powerful, and something special; we have no doubt about it.
What is IRC? IRC (Internet Relay Chat) is a plain-text-based, real-time communication system over the internet for one-on-one chat, group chat, and online communities, making it ideal for discussion.
In the old days it was a popular network for free and open-source software (FOSS) projects and developers; for example, many large projects (like Debian, Arch Linux, GNOME, and Python) used it for discussion. Even today IRC is used by many communities.
Usage: mainly a discussion forum for open-source software developers, technology, and hobbyist communities.
Why IRC? We already have so many chat platforms that are very advanced, where I could use multimedia too; this one is very basic, right? So why should I go for it?
Yes, it is very basic, but the infrastructure of IRC is not like other chat platforms. In my point of view, the important differences are privacy and no ads.
Advantages over other Chat Platforms:
No Ads or Popups: we are not distracted by ads or popups, because our information is not shared with any companies for tracking or targeted marketing.
Privacy: many IRC networks do not require your email, mobile number, or even registration. You can simply type your name or nickname, select a server, and start chatting instantly. Chat logs can also be stored if required.
Open Source and Free: server and client, the entire networking model is free and open source. Anybody can install IRC servers/clients and connect to the network.
Decentralized: as the servers are decentralized, the network can keep working even when one server has issues and goes down. Users can connect to different servers within the same network, which improves reliability and performance.
Low Latency: it is a free, real-time communication system with low latency, which is very important for technical communities and time-sensitive conversations.
Customization and Extensibility: custom scripts can be written to enhance functionality, and IRC supports automation through bots, which can record chats, send notifications, moderate channels, and so on.
Channel Control: channel operators (group admins) have fine-grained control over users, such as who can join and who can be kicked out.
Lightweight Tool: as it is lightweight, no high-end hardware is required. IRC can be accessed even from older computers or low-powered devices like a Raspberry Pi.
History and Logging: some IRC servers allow logging of chats through bots or in local storage.
Inventor: IRC was developed by Jarkko Oikarinen (Finland) in 1988.
Some IRC networks/servers: Libera.Chat (#ubuntu, #debian, #python, #opensource); EFNet, the Eris Free Network (#linux, #python, #hackers); IRCnet (#linux, #chat, #help); Undernet (#help, #anime, #music); QuakeNet (#quake, #gamers, #techsupport); DALnet, for both casual users and larger communities (#tech, #gaming, #music).
Directly on the website: Libera WebClient, https://web.libera.chat/gamja/ You can click Join, then type the channel name (group), e.g. #kaniyam.
How to get connected with IRC: after installing an IRC client, open it. Add a new network (e.g., "Libera.Chat"). Set the server to irc.libera.chat (or any of the alternate servers above). Optionally, you can specify a port (the default is 6667 for non-SSL, 6697 for SSL). Once connected, join a channel like #ubuntu, #python, or #freenode-migrants.
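Under the hood, the client just sends plain-text commands over the connection, which ties back to the plain-text theme. A minimal sketch of the registration and join messages a client would send to a server such as irc.libera.chat on port 6667 (the nickname demo_user is made up; a real client writes these lines to a socket):

```python
# Build the raw IRC registration/join commands a client sends on connect.
# IRC is a line-based plain-text protocol; each command ends with \r\n.
def irc_handshake(nick: str, channel: str) -> list[str]:
    return [
        f"NICK {nick}\r\n",              # choose a nickname
        f"USER {nick} 0 * :{nick}\r\n",  # register the user with the server
        f"JOIN {channel}\r\n",           # join a channel (group)
    ]

for line in irc_handshake("demo_user", "#kaniyam"):
    print(line, end="")
```

Since every command is readable text, you can debug or even drive an IRC session with nothing more than a raw TCP tool.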
Popular channels to join on Libera.Chat: #ubuntu, #debian, #python, #opensource, #kaniyam
Local logs: logs are typically saved in plain text and can be stored locally, allowing you to review past conversations. How to get local logs from our system (IRC libera.chat server): the folder /home//.local/share/weechat/logs/ From the web, IRCBot history: https://ircbot.comm-central.org:8080
I have always had the challenge of reading, whether it is technical documentation, general documentation, or anything else. If I remember correctly, the last time I read something continuously was in my school and college days, and that too nothing extraordinary: weekly magazines like Anandha Vikatan/Kumudham and, very rarely, newspapers. That improved a little when I started working and regularly read the news headlines from "The Hindu". That is all the reading I have done in my entire life. I have this habit of purchasing books and thinking.. one day.. that one day will come, I will become a pro reader, and I will read all the books. But that has not happened to date.
So I was pouring out all these frustrations in the "#Kaniyam" IRC chat, along with some more concerns, like my trouble planning things: I used to start on one thing, and if I came across something else, I would leave whatever I was doing and start the new item, and so on and on. Then Srini from the Kaniyam IRC group suggested various ideas to try, and one such idea was reading this book called "Eat the Frog".
I wouldn't say the book has changed me completely overnight, but the practice of reading a few pages continuously gives a sense of satisfaction. I am not saying I have read 20-30 pages at a stretch; instead, I planned to complete a chapter whenever I started.
The book as such has things we often hear or see elsewhere, but more importantly, it is structured. When I say it is structured, I mean it starts by explaining why the author named the book "Eat the Frog".
Suppose that, in our daily life, eating a frog is one of our primary tasks. How would one plan for it? Because eating a frog is not that easy. And if you have more than one frog, how would one plan for that? Here the author compares the frogs to the tasks we have in a day. Not all tasks are as difficult as eating a frog. So if we have frogs of different sizes and the goal is to finish eating them all in a day, how would one approach it? One would target finishing the biggest, then the next, and so on. By the time one finishes the biggest, one gains the confidence to go for the next smaller frog.
This analogy works the same way for our daily tasks. Rather than picking the easy ones and saving the bulk of the harder tasks for later, plan to finish the hardest or most difficult task first; that helps us move on to the next difficult task with a lot more confidence.
This is primarily what Chapter 1 discusses. After reading it, I wanted to see whether the approach works, so I started implementing it immediately by listing the items I wanted to complete that day. I then sorted those items by difficulty (in terms of time). I did not create an exhaustive list, just 4 tasks for that day, of which 2 were time-consuming or difficult.
By the end of the day, I was able to complete the top 2, leaving the remaining 2. I still felt happy because I had completed the 2 harder ones. I moved the pending 2 to the next day and kept their priority at the top.
So far it is working and I will continue to write about the other chapters as I complete reading them.
“Let us all get into the habit of reading and celebrate.. happy reading”
// 1. Reverse an ArrayList without using inbuilt method
// 2. Find Duplicate Elements in a List
// 3. Alphabetical Order and Ascending Order (Done in ArrayList)
// 4. Merge Two Lists and Remove Duplicates
// 5. Removing Even Nos from the List
// 6. Array to List, List to Array
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.HashSet;
import java.util.Iterator;
import java.util.List;
public class CollectionsInJava {
public static void main(String[] args) {
ArrayList<String> names = new ArrayList<>(Arrays.asList("Abinaya", "Ramya", "Gowri", "Swetha", "Sugi", "Anusuya", "Moogambigai","Jasima","Aysha"));
ArrayList<Integer> al2 = new ArrayList<>(Arrays.asList(100,90,30,20,60,40));
ArrayList<Integer> al = insertValuesIntoAL();
System.out.println("Before Reversing ArrayList="+ al);
System.out.println("Reversed ArrayList="+ reverseArrayList(al));
System.out.println("Duplicates in ArrayList="+findDuplicates(al));
System.out.println("Before Order = "+names);
Collections.sort(names);
System.out.println("After Alphabetical Order = " + names);
Collections.sort(al);
System.out.println("Ascending Order = "+ al);
System.out.println("List -1 = "+al);
System.out.println("List -2 = "+al2);
System.out.println("After Merging and Removing Duplicates="+mergeTwoLists(al,al2));
System.out.println("After Removing Even Nos from the List-1 = "+removeEvenNos(al));
arrayToListViceVersa(al,new int[] {11,12,13,14,15}); //Sending ArrayList and anonymous array
}
// 1. Reverse an ArrayList without using inbuilt method
private static ArrayList<Integer> reverseArrayList(ArrayList<Integer> al) {
int n=al.size();
int j=n-1, mid=n/2;
for (int i=0; i<mid; i++) {
int temp = al.get(i);
al.set(i, al.get(j));
al.set(j--, temp);
}
return al;
}
// 2. Find Duplicate Elements in a List
private static ArrayList<Integer> findDuplicates(ArrayList<Integer> al) {
HashSet<Integer> hs = new HashSet<>();
ArrayList<Integer> arl = new ArrayList<>();
for (int ele:al) {
if (!hs.add(ele)) arl.add(ele);
}
return arl;
}
//4. Merge Two Lists into one and Remove Duplicates
private static HashSet<Integer> mergeTwoLists(ArrayList<Integer> arl1, ArrayList<Integer> arl2) {
HashSet<Integer> hs = new HashSet<>();
hs.addAll(arl1);
hs.addAll(arl2);
return hs;
}
// 5. Removing Even Nos from the List
private static ArrayList<Integer> removeEvenNos(ArrayList<Integer> al) {
ArrayList<Integer> res = new ArrayList<>();
Iterator<Integer> itr = al.iterator();
while (itr.hasNext()) {
int ele = itr.next();
if (ele % 2 == 1) res.add(ele); // keep only the odd numbers
}
return res;
}
// 6. Array to List, List to Array
private static void arrayToListViceVersa(ArrayList<Integer> arl, int[] ar) {
Integer[] arr = arl.toArray(new Integer[0]);
System.out.println("Convert List to Array = " + Arrays.toString(arr));
List<Integer> lst = Arrays.asList(arr);
System.out.println("Convert Array to List = " + lst);
}
private static ArrayList<Integer> insertValuesIntoAL() {
Integer[] ar = {30, 40, 60, 10, 94, 23, 5, 46, 40, 94}; // note: writing 05 would make it an octal literal
ArrayList<Integer> arl = new ArrayList<>();
Collections.addAll(arl, ar);
//Collections.reverse(al); //IN BUILT METHOD
return arl;
//Arrays.sort(ar);
//List lst = Arrays.asList(ar); //TBD
//return new ArrayList<Integer>(lst);
}
}