Want to lock a table against DDL operations? Use the code below to make sure no one alters your tables. To achieve this, we will create a database-level trigger that fires on all ALTER TABLE commands, filter it down by table name, and finally throw an error telling the user that they cannot alter the table.
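The approach above can be sketched as a SQL Server DDL trigger. This is a minimal reconstruction, not the post's original code; the trigger name and the protected table name "Customers" are assumed examples.

```sql
-- Minimal sketch (assumed names throughout): fires on every ALTER TABLE
-- in the database, checks which table is being altered, and rolls the
-- statement back for the protected table.
CREATE TRIGGER trg_block_alter_customers
ON DATABASE
FOR ALTER_TABLE
AS
BEGIN
    -- EVENTDATA() describes the DDL statement that fired the trigger.
    DECLARE @event XML = EVENTDATA();
    DECLARE @tableName NVARCHAR(255) =
        @event.value('(/EVENT_INSTANCE/ObjectName)[1]', 'NVARCHAR(255)');

    IF @tableName = N'Customers'  -- the table we want to protect
    BEGIN
        RAISERROR (N'ALTER TABLE is not allowed on Customers.', 16, 1);
        ROLLBACK;
    END
END;
```

Any ALTER TABLE against the protected table is rolled back; other tables are unaffected.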
Do you need a quick way to store data without going through the hassle of setting up a database? If your answer is yes, then you've come to the right place. This post will show you how you can use Google Sheets as your database.
For the purposes of this blog post I will be using this Google sheet.
As you can see, we will be collecting the following data from the user: Name, Email and Age.
Create the API
Go to the Google sheet you want to use.
Create column headers in the first row.
Click on Tools > Script editor.
Copy the following code into the editor.
Click on Run > Run function > setup.
Now publish your script as a web app to get the request URL.
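The script the steps above refer to might look like the following. This is a hedged reconstruction, not the author's original; the sheet name "Sheet1" and the JSON response shape are assumptions.

```javascript
// Sketch of an Apps Script web-app backend for the sheet (reconstructed,
// not the post's original). setup() remembers the spreadsheet ID;
// doPost() appends one row per incoming request.
var SHEET_NAME = "Sheet1"; // assumed sheet/tab name

function setup() {
  // Store the active spreadsheet's ID so the web app can find it later.
  var doc = SpreadsheetApp.getActiveSpreadsheet();
  PropertiesService.getScriptProperties().setProperty("key", doc.getId());
}

// Pure helper: map request parameters to the column order Name, Email, Age.
function rowFromParams(e) {
  return [e.parameter.Name, e.parameter.Email, e.parameter.Age];
}

function doPost(e) {
  var id = PropertiesService.getScriptProperties().getProperty("key");
  var sheet = SpreadsheetApp.openById(id).getSheetByName(SHEET_NAME);
  sheet.appendRow(rowFromParams(e));
  return ContentService
    .createTextOutput(JSON.stringify({ result: "success" }))
    .setMimeType(ContentService.MimeType.JSON);
}
```

Once deployed, any HTTP POST to the web-app URL with Name, Email and Age form fields appends one row to the sheet.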
At FreeTamilEbooks.com we have published 850 ebooks, all under shareable Creative Commons licenses. Many people have asked for text-only versions of these books. As it is a big task, it took a long time. Thanks to Lenin and Anwar of Kaniyam Foundation, and to all the contributors, writers and readers for making this project alive and a great success.
We publish the books in EPUB format along with PDF. An EPUB is just a zip file of HTML files, so we can copy all of its content as Unicode text. Pandoc is a wonderful open source tool that can convert an EPUB to a plain text file.
Here is the list of actions we have to do:
Get the URLs of all the 850+ epub files.
Download them all.
Using pandoc, convert each epub to a text file.
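The download and conversion steps can be sketched in Python. This is a minimal sketch under stated assumptions: pandoc is installed and on the PATH, the URL list has already been collected, and all function names here are my own.

```python
# Sketch of the download + pandoc-conversion steps (assumed tooling:
# urllib for the download, the pandoc CLI for EPUB-to-text).
import os
import subprocess
import urllib.request

def txt_name(epub_path):
    """Derive the output text filename from an epub filename."""
    root, _ = os.path.splitext(epub_path)
    return root + ".txt"

def download(url, dest_dir="epubs"):
    """Fetch one epub into dest_dir and return its local path."""
    os.makedirs(dest_dir, exist_ok=True)
    dest = os.path.join(dest_dir, url.rsplit("/", 1)[-1])
    urllib.request.urlretrieve(url, dest)
    return dest

def to_text(epub_path):
    """Convert one epub to plain text with pandoc (pandoc must be installed)."""
    subprocess.run(
        ["pandoc", epub_path, "-t", "plain", "-o", txt_name(epub_path)],
        check=True,
    )
```

Looping `to_text(download(url))` over the collected URLs produces one .txt file per book.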
So far, we don't have a metadata file for all the published books, so getting the links to all the epub files needs some programming. As Python is a Swiss Army knife for automating anything, I started to explore the WordPress REST API with Python to get the content of all the book pages.
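The link-gathering step might look like this. Assumptions: the standard /wp-json/wp/v2/posts route is enabled on the site, and the regex and sample markup are illustrative only.

```python
# Minimal sketch: fetch post HTML from the WordPress REST API and pick
# out the .epub links. The endpoint path is the standard wp-json route
# and is an assumption here.
import re
import json
import urllib.request

API = "https://freetamilebooks.com/wp-json/wp/v2/posts?per_page=100&page=%d"

def epub_links(html):
    """Extract .epub download URLs from a post's rendered HTML."""
    return re.findall(r'href="([^"]+\.epub)"', html)

def links_from_page(page):
    """Fetch one page of posts and collect every epub link (needs network)."""
    with urllib.request.urlopen(API % page) as resp:
        posts = json.load(resp)
    return [url for post in posts
            for url in epub_links(post["content"]["rendered"])]
```

Iterating `links_from_page` until the API returns an empty list yields the full URL list for the download step.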
At Kaniyam Foundation, we have a dream of collecting and publishing terabytes of Tamil text data for Tamil LLMs and other research work. We are documenting the websites that provide openly licensed Tamil content (public domain, Creative Commons) here: https://github.com/KaniyamFoundation/ProjectIdeas/issues/198
From there, we can find the websites, scrape them, and use and share the data.
Today, I started to explore the Tamil Wikipedia data.
All the Wikipedia content is stored as XML and SQL dump files here.
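The post does not show the exact command used for the dump-to-text conversion, so here is one way to pull article text out of a pages-articles XML dump. Assumptions: the namespace URI varies by dump version, and a real multi-GB dump should be streamed with `iterparse` rather than loaded whole as done here for brevity.

```python
# Sketch (not the post's original command): extract raw article text from
# a MediaWiki pages-articles XML dump.
import xml.etree.ElementTree as ET

# The export namespace differs between dump versions; 0.10 is assumed here.
NS = "{http://www.mediawiki.org/xml/export-0.10/}"

def page_texts(xml_bytes):
    """Yield the raw wikitext of every <page> in the dump."""
    root = ET.fromstring(xml_bytes)  # use ET.iterparse for real dumps
    for page in root.iter(NS + "page"):
        text = page.find("%srevision/%stext" % (NS, NS))
        if text is not None and text.text:
            yield text.text
```

Note the output is still wikitext markup; stripping templates and links to get clean plain text needs a further pass.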
The conversion ran for 2 minutes and gave a 627 MB file, wiki.txt, with the content of all the articles as one single big plain text file.
I compressed it with 7z, as it gives better compression:

mv wiki.txt tawiki-20240501-pages-article-wiki.txt
7z a tawiki-20240501-pages-article-text.7z tawiki-20240501-pages-article-wiki.txt

The compressed file is 70 MB.
Like this, I will continue to collect plain text Tamil data from various sources. We have to find where we can publish a few hundred GBs to TBs of data for free. Till then, I will share these files from my self-hosted desktop PC at home.