Best practices for building LLMs

Integrating CrewAI with different LLMs expands the framework’s versatility, allowing for customized, efficient AI solutions across various domains and platforms. You can switch between APIs and models seamlessly using environment variables, with support for platforms like FastChat, LM Studio, and Mistral AI. For local LLM integration, Ollama is a popular choice, offering customization and privacy benefits; to integrate it with CrewAI, set the appropriate environment variables as shown below. CrewAI can also connect to hosted APIs such as Azure, and it is compatible with all LangChain LLM components, enabling diverse integrations for tailored AI solutions.
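
Below is a minimal sketch of what that environment setup can look like, assuming Ollama’s OpenAI-compatible endpoint on its default port; the exact variable names CrewAI reads vary by version, so treat these as illustrative.

```python
import os

# Point the OpenAI-compatible client that CrewAI uses at a local Ollama server.
# Variable names follow the common OpenAI-style convention; check your CrewAI
# version's docs for the exact names it expects.
os.environ["OPENAI_API_BASE"] = "http://localhost:11434/v1"  # Ollama's default local endpoint
os.environ["OPENAI_MODEL_NAME"] = "llama2"                   # any model you have pulled into Ollama
os.environ["OPENAI_API_KEY"] = "NA"                          # ignored by Ollama, but must be non-empty
```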

You can build LLMs on-premises or using a hyperscaler’s cloud-based options. Cloud services are simple and scalable: they offload infrastructure concerns and let you consume clearly defined services, and using open-source and free language models can reduce costs further. Fine-tuning large language models (LLMs) has become essential for enterprises seeking to optimize their operational processes. Tailoring LLMs for distinct tasks, industries, or datasets extends the capabilities of these models, ensuring their relevance and value in a dynamic digital landscape. Looking ahead, ongoing exploration and innovation in LLMs, coupled with refined fine-tuning methodologies, are poised to advance the development of smarter, more efficient, and contextually aware AI systems.

Training on limited or unrepresentative data restricts a model’s ability to fully understand context and make accurate predictions, affecting its overall performance. Large Language Models (LLMs) are foundation models that utilize deep learning for natural language processing (NLP) and natural language generation (NLG) tasks. They are designed to learn the complexity and relationships of language by being pre-trained on vast amounts of data. Techniques such as fine-tuning, in-context learning, and zero/one/few-shot learning then allow these pre-trained models to be adapted for specific tasks. A paradigm shift is under way, driven by recognition of the transformative potential of smaller, custom-trained models that leverage domain-specific data; on the tasks they are trained for, such models can surpass broad-spectrum models like GPT-3.5, which serves as the foundation for ChatGPT.

We support a wide variety of GPU cards, providing fast processing speeds and reliable uptime for complex applications such as deep learning algorithms and simulations. Additionally, our expert support team is available 24/7 to assist with any technical challenges that may arise. The quality of RAG is highly dependent on the quality of the embedding model. If the embeddings don’t capture the right features from the documents and match them to the user prompts, then the RAG pipeline will not be able to retrieve relevant documents. Machine learning is a sub-field of AI that develops statistical models and algorithms, enabling computers to learn and perform tasks as efficiently as humans. Auto-GPT is an autonomous tool that allows large language models (LLMs) to operate autonomously, enabling them to think, plan and execute actions without constant human intervention.

For example, you can implement encryption, access controls, and other security measures that are appropriate for your data and your organization’s security policies. Pretraining can be done using various architectures, including autoencoders, recurrent neural networks (RNNs), and transformers; the best-known transformer-based pretrained models are BERT and GPT. Pre-trained embedding models offer embeddings already trained on a large corpus.

By providing these instructions and examples, the LLM understands that you’re asking it to infer what you need, and so it will generate a contextually relevant output. Generative AI coding tools are powered by LLMs, and today’s LLMs are structured as transformers. The transformer architecture makes the model good at connecting the dots between data, but the model still needs to learn what data to process and in what order. The ability of LLMs to produce high-level output lies in their embeddings: embeddings condense huge volumes of textual data while encapsulating both semantic and syntactic meaning. Their ability to store rich representations of textual information allows LLMs to produce highly contextual outputs.

Parameter-efficient fine-tuning (PEFT) techniques use clever optimizations to selectively add and update a small number of parameters or layers on top of the original LLM architecture. The pretrained LLM weights are kept frozen, and significantly fewer parameters are updated during PEFT using domain- and task-specific datasets. The basis of such training is specialized datasets and domain-specific content. Factors like model size, training dataset volume, and target-domain complexity drive the resource requirements.

The dataset used for the Databricks Dolly model is called “databricks-dolly-15k,” and consists of more than 15,000 prompt/response pairs generated by Databricks employees. These pairs were created in eight different instruction categories, including the seven outlined in the InstructGPT paper and an open-ended free-form category. Halfway through the data generation process, contributors were allowed to answer questions posed by other contributors.

The embedding layer takes the input, a sequence of words, and turns each word into a vector representation. This vector representation captures the meaning of the word, along with its relationship to other words. For instance, words like “tea”, “coffee” and “cookie” will be represented close together compared to “tea” and “car”. This approach to representing textual knowledge captures richer semantic and syntactic meaning. Downstream, the hit rate metric helps determine how well a model performs in retrieving documents that match a query, indicating its relevance and retrieval accuracy.
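
A quick way to see this in practice is to embed the words and compare cosine similarities; here is a minimal sketch using the sentence-transformers library (the model choice is illustrative):

```python
from sentence_transformers import SentenceTransformer, util

# Any sentence-transformers checkpoint works here; this one is small and fast.
model = SentenceTransformer("all-MiniLM-L6-v2")
tea, coffee, car = model.encode(["tea", "coffee", "car"])

print(util.cos_sim(tea, coffee))  # relatively high: related concepts
print(util.cos_sim(tea, car))    # noticeably lower: unrelated concepts
```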

Training an LLM to meet specific business needs can result in an array of benefits. For example, a retrained LLM can generate responses that are tailored to specific products or workflows. Every application has a different flavor, but the basic underpinnings of those applications overlap.

The sections below first walk through the notebook while summarizing the main concepts. Then this notebook will be extended to carry out prompt learning on larger NeMo models. Prompt learning within the context of NeMo refers to two parameter-efficient fine-tuning techniques, as detailed below. For more information, see Adapting P-Tuning to Solve Non-English Downstream Tasks.

As you can see, our fine-tuned model’s (ft_gist) hit rate is quite impressive, even though it was trained on less data for fewer epochs. Essentially, our fine-tuned model now outperforms the pre-trained model (pre_trained_gist) at retrieving relevant documents that match the query. The hit rate metric measures retrieval performance: a hit occurs when the retrieved documents contain the ground-truth context. This metric is crucial for assessing the effectiveness of the fine-tuned embedding model. For those eager to delve deeper into the capabilities of LangChain and enhance their proficiency in creating custom LLM models, additional learning resources are available.
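
As a reference point, hit rate can be computed in a few lines. The sketch below is generic (the names ft_gist and pre_trained_gist above come from the article’s experiment; the function and document ids here are made up):

```python
def hit_rate(retrieved_docs_per_query, ground_truth_per_query):
    """Fraction of queries for which the retrieved documents contain
    the ground-truth context (a 'hit')."""
    hits = sum(
        truth in retrieved
        for retrieved, truth in zip(retrieved_docs_per_query, ground_truth_per_query)
    )
    return hits / len(ground_truth_per_query)

# Example: the second query's ground truth was not retrieved, so the rate is 0.5
print(hit_rate([["d1", "d7"], ["d2", "d9"]], ["d7", "d4"]))
```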

Instead, you may need to spend a little time with the documentation that’s already out there, at which point you will be able to experiment with the model as well as fine-tune it. For this tutorial we are not going to track our training metrics, so let’s disable Weights and Biases. The W&B platform is a collection of robust components for monitoring and visualizing data and models, and for conveying results. To deactivate Weights and Biases during fine-tuning, set the environment property shown below. The fine-tuned adapter is then loaded into the pre-trained model and used for inference. By following these steps, you’ll be able to customize your own model, interact with it, and begin exploring the world of large language models with ease.
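
With the Hugging Face Trainer, one way to do this is via the WANDB_DISABLED environment variable (setting report_to="none" in TrainingArguments has the same effect):

```python
import os

# Disable Weights & Biases logging during fine-tuning.
os.environ["WANDB_DISABLED"] = "true"
```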

Companies need to recognize the implications of using these advanced models. While LLMs offer immense benefits, businesses must be mindful of the limitations and challenges they may pose. Industries continue to explore and develop custom LLMs so they work precisely according to their vision.

How to use LLMs to create custom embedding models – TechTalks. Posted: Mon, 08 Jan 2024 08:00:00 GMT [source]

By constructing and deploying private LLMs, organizations not only fulfill legal requirements but also foster trust among stakeholders by demonstrating a commitment to responsible and compliant AI practices. Building your private LLM lets you fine-tune the model to your specific domain or use case, by training it on a smaller, domain-specific dataset relevant to that use case. This approach ensures the model performs better for your purposes than general-purpose models. Embedding is a crucial component of LLMs, enabling them to map words or tokens to dense, low-dimensional vectors.

The data collected for training is gathered from the internet, primarily from social media, websites, platforms, academic papers, and similar sources. This broad corpus helps ensure the training data covers as many domains as possible, which is what gives large-scale language models their general cross-domain knowledge. Multilingual models are trained on diverse language datasets and can process and produce text in different languages; they are helpful for tasks like cross-lingual information retrieval, multilingual bots, or machine translation. All in all, transformer models have played a significant role in natural language processing, and as companies leverage this technology and develop LLMs of their own, businesses and tech professionals alike must understand how it works.

It is essential to ensure that these sequences do not surpass the model’s maximum token limit. The researchers have not released any source code or data for their experiments. But you can see a very simplified version of the pipeline in this Python notebook that I created. Naturally, this is a very flexible process and you can easily customize the templates based on your needs. By understanding the architecture of generative AI, enterprises can make informed decisions about which models and techniques to use for different use cases.

Especially crucial is understanding how these models handle natural language queries, enabling them to respond accurately to human questions and requests. Before diving into building your custom LLM with LangChain, it’s crucial to set clear goals for your project. Are you aiming to improve language understanding in chatbots or enhance text generation capabilities? Planning your project meticulously from the outset will streamline the development process and ensure that your custom LLM aligns perfectly with your objectives. This query can also be created by an upstream LLM; the specifics do not matter as long as the sentence is mostly well-formed.

If you have foundational LLMs trained on large amounts of raw internet data, some of the information in there is likely to have grown stale. From what we’ve seen, doing this right involves fine-tuning an LLM with a unique set of instructions: for example, instructions that change based on the task or on properties of the data, such as length, so that the model adapts to the new data. Microsoft recently open-sourced Phi-2, a Small Language Model (SLM) with 2.7 billion parameters. This language model exhibits remarkable reasoning and language understanding capabilities, achieving state-of-the-art performance among base language models of its size.

We’ll use the bitsandbytes library to quantize the model, as it has a nice integration with transformers. All we need to do is define a bitsandbytes config, and then use it when loading the model. In banking and finance, custom LLMs automate customer support, provide advanced financial guidance, assess risks, and detect fraud.
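
A typical configuration looks like the sketch below; the checkpoint name is a placeholder and the 4-bit settings are common defaults rather than the article’s exact values.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Common 4-bit quantization settings: NF4 quantization with bfloat16 compute.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1",   # placeholder checkpoint
    quantization_config=bnb_config,
    device_map="auto",
)
```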

Tokenization helps to reduce the complexity of text data, making it easier for machine learning models to process and understand. One of the ways we collect this type of information is through a tradition we call “Follow-Me-Homes,” where we sit down with our end customers, listen to their pain points, and observe how they use our products. We’ve developed this process so we can repeat it iteratively to create increasingly high-quality datasets.

The selection is based on the conversation history: the history is embedded, and the most similar responses are selected. The default implementation embeds the generated intent label and all intent labels from the domain, and returns the closest intent label from the domain. By default, only the intent labels that are used in the few-shot examples are included in the prompt. Moreover, it is equally important to note that no one-size-fits-all evaluation metric exists. Therefore, it is essential to use a variety of different evaluation methods to get a wholesome picture of the LLM’s performance.

Now, this class allows us to use the set of tools available in LangChain. I also provide an additional example in the accompanying notebook, demonstrating how to use this class for extracting topics from PDF documents. In my previous article, I discussed an efficient method for extracting the main topics from a PDF document; it involved a single call to an LLM and utilization of the Latent Dirichlet Allocation algorithm. This was an example of the power of combining existing NLP techniques with LLMs. Note that for this particular implementation, we initialized our Mistral7B model with an additional tokenizer parameter, as this is required in the decoding step of the generate() method.
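
For orientation, a custom LangChain LLM wrapper generally looks like the sketch below: subclass the LLM base class and implement _call and _llm_type. The class and field names here are hypothetical, and the import path differs across LangChain versions.

```python
from typing import Any, List, Optional

from langchain.llms.base import LLM  # import path varies in newer LangChain versions


class LocalMistralLLM(LLM):
    """Hypothetical wrapper exposing a local Hugging Face model to LangChain."""

    model: Any      # e.g. an AutoModelForCausalLM instance
    tokenizer: Any  # the matching tokenizer

    @property
    def _llm_type(self) -> str:
        return "local-mistral"

    def _call(self, prompt: str, stop: Optional[List[str]] = None, **kwargs: Any) -> str:
        inputs = self.tokenizer(prompt, return_tensors="pt").to(self.model.device)
        output_ids = self.model.generate(**inputs, max_new_tokens=256)
        return self.tokenizer.decode(output_ids[0], skip_special_tokens=True)
```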

Background on fine-tuning

This flexibility allows for the creation of complex applications that leverage the power of language models effectively. The provided code example and reference serve as a starting point for you to build and customize your integration based on your specific needs. Fine-tuning can achieve the best accuracy on a range of use cases compared with other customization approaches, though a detailed analysis with an appropriate approach and benchmarks is needed to confirm this for your case.

The advantage of transfer learning is that it allows the model to leverage the vast amount of general language knowledge learned during pre-training. This means the model can learn more quickly and accurately from smaller, labeled datasets, reducing the need for large labeled datasets and extensive training for each new task. Transfer learning can significantly reduce the time and resources required to train a model for a new task, making it a highly efficient approach. With the growing use of large language models in various fields, there is a rising concern about the privacy and security of data used to train these models. Many pre-trained LLMs available today are trained on public datasets containing sensitive information, such as personal or proprietary data, that could be misused if accessed by unauthorized entities.

I’m eager to develop a Large Language Model (LLM) that emulates ChatGPT, tailored precisely to my specific dataset. In RLHF, a dataset consisting of prompts with multiple responses ranked by humans is used to train a reward model (RM) to predict human preference. After the RM is trained, stage 3 of RLHF focuses on fine-tuning the initial policy model against the RM using reinforcement learning with a proximal policy optimization (PPO) algorithm. These three stages of RLHF, performed iteratively, enable LLMs to generate outputs that are more aligned with human preferences and follow instructions more effectively. NVIDIA NeMo is an end-to-end, cloud-native framework to build, customize, and deploy generative AI models anywhere.

Launches Solution for Enterprises to Customize LLMs – Appen. Posted: Tue, 26 Mar 2024 07:00:00 GMT [source]

Most default metrics offered by deepeval are LLM-Evals, which means they are evaluated using LLMs. This is deliberate: LLM-Evals are versatile in nature and align better with human expectations than traditional model-based approaches. With all the prep work complete, it’s time to perform the model retraining.

In the rest of this article, we discuss fine-tuning LLMs and scenarios where it can be a powerful tool. We also share some best practices and lessons learned from our first-hand experiences with building, iterating, and implementing custom LLMs within an enterprise software development organization. Generative AI has grown from an interesting research topic into an industry-changing technology. Many companies are racing to integrate GenAI features into their products and engineering workflows, but the process is more complicated than it might seem.

A custom metric is a type of metric you can easily create by implementing the abstract methods and properties of the base classes provided by deepeval. They are extremely versatile and integrate seamlessly with Confident AI without requiring any additional setup. As you’ll see below, a custom metric can be either an LLM-Eval (LLM-evaluated) or a classic metric.
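
As a concrete illustration, here is a toy classic (non-LLM-evaluated) metric following the interface deepeval documents for custom metrics; verify the details against your installed version.

```python
from deepeval.metrics import BaseMetric
from deepeval.test_case import LLMTestCase


class LengthMetric(BaseMetric):
    """Toy metric: passes when the actual output is reasonably long."""

    def __init__(self, threshold: float = 0.5):
        self.threshold = threshold

    def measure(self, test_case: LLMTestCase) -> float:
        # Score scales with output length, capped at 1.0.
        self.score = min(len(test_case.actual_output) / 100, 1.0)
        self.success = self.score >= self.threshold
        return self.score

    def is_successful(self) -> bool:
        return self.success

    @property
    def __name__(self):
        return "Length"
```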

By receiving this training, custom LLMs become finely tuned experts in their respective domains. They acquire the knowledge and skills necessary to deliver precise and valuable insights. Sometimes, people have the most unique questions, and one can’t blame them! Custom LLMs can generate tailored responses to customer queries, offer 24/7 support, and boost efficiency.

NeMo leverages the PyTorch Lightning interface, so training can be done as simply as invoking a trainer.fit(model) statement. It excels in generating human-like text, understanding context, and producing diverse outputs. As with shopping for designer brands versus thrift-store finds, custom LLMs’ licensing fees can vary: you’ve got open-source large language models with lesser fees, and then the ritzy ones with heftier tags for commercial use. This comparative analysis offers a thorough investigation of the traits, uses, and consequences of these two categories of large language models to shed light on them. It involves setting up a backend server that handles text exchanges with the Retell server to provide responses to the user.

During the pre-training phase, LLMs are trained to forecast the next token in the text. Recently, “OpenChat,” the latest dialog-optimized large language model inspired by LLaMA-13B, achieved 105.7% of the ChatGPT score on the Vicuna GPT-4 evaluation. The feedforward layer of an LLM is made of several fully connected layers that transform the input embeddings. In doing so, these layers allow the model to extract higher-level abstractions, that is, to recognize the user’s intent from the text input. Transformer models also use self-attention mechanisms, which allow the model to learn faster than conventional long short-term memory models: self-attention lets the transformer weigh different parts of the sequence, or the complete sentence, when creating predictions.
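
For reference, the scaled dot-product attention at the heart of the transformer is:

```latex
\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{QK^{\top}}{\sqrt{d_k}}\right) V
```

where Q, K, and V are the query, key, and value projections of the input sequence and d_k is the key dimension; the softmax weights determine how much each position attends to every other position.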

Legal professionals can benefit from LLM-generated insights on case law, statutes, and legal precedents, leading to well-informed strategies. By fine-tuning the LLMs with legal terminology and nuances, organizations can streamline due diligence processes and ensure compliance with ever-evolving regulations. Furthermore, organizations can generate content while maintaining confidentiality, as private LLMs generate information without sharing sensitive data externally. They also help address fairness and non-discrimination provisions through bias mitigation.

For organizations aiming to scale without breaking the bank on hardware, it’s a tricky task. Custom and general Language Models vary notably, impacting their usability and scalability. When comparing the computing needs for training and inference, these differences become evident, offering valuable insights into model selection. They’re like linguistic gymnasts, flipping from topic to topic with ease.

To embark on your journey of creating a ChatGPT-style app with LangChain, the first step is to set up your environment correctly. This involves installing LangChain and its necessary dependencies, as well as familiarizing yourself with the basics of the framework. A simple way to do this is to upload your files (PDFs, Word docs, virtually any type is supported), then generate reports using prompts based on those uploaded files.

Mastering Language: Custom LLM Development Services for Your Business

The model was augmented with mined triplets from the MTEB classification training datasets. This augmentation enables direct encoding of queries for retrieval tasks without crafting instructions. Enterprises need custom models to tailor language processing capabilities to their specific use cases and domain knowledge. Custom LLMs enable a business to generate and understand text more efficiently and accurately within a given industry or organizational context. These models are trained on vast amounts of data, allowing them to learn the nuances of language and predict contextually relevant outputs.

While these models provide great generalization across various domains, they might not be so good for domain-specific tasks. To address that, we need to improve the embeddings to make them much more adaptable to domain-specific tasks. Selecting the right data sources is crucial for training a robust custom LLM within LangChain.

Embedding models create numerical representations that capture the main features of the input data. For example, word embeddings capture the semantic meanings of words, and sentence embeddings capture the relationships between words in a sentence. Embeddings are useful for various tasks, such as comparing the similarity of two words, sentences, or texts. The legal industry can utilize custom LLMs to improve the efficiency, accuracy, and accessibility of legal services. These models can assist in document review, legal research, and case analysis, saving time and reducing costs.

Step 1: Data processing

Defense and intelligence agencies handle highly classified information related to national security, intelligence gathering, and strategic planning. Within this context, private Large Language Models (LLMs) offer invaluable support. By analyzing intricate security threats, deciphering encrypted communications, and generating actionable insights, these LLMs empower agencies to swiftly and comprehensively assess potential risks. The role of private LLMs in enhancing threat detection, intelligence decoding, and strategic decision-making is paramount. Dolly does exhibit a surprisingly high-quality instruction-following behavior that is not characteristic of the foundation model on which it is based. This makes Dolly an excellent choice for businesses that want to build their LLMs on a proven model specifically designed for instruction following.

  • I predict that GPU price reductions and open-source software will lower LLM creation costs in the near future, so get ready and start creating custom LLMs to gain a business edge.
  • The Dolly model achieved a perplexity score of around 20 on the C4 dataset, which is a large corpus of text used to train language models.

In our detailed analysis, we’ll pit custom large language models against general-purpose ones. From a single public checkpoint, these models can be adapted to numerous NLP applications through a parameter-efficient, compute-efficient process. The prompt contains all 10 virtual tokens at the beginning, followed by the context, the question, and finally the answer. The corresponding fields in the training data JSON object are mapped to this prompt template to form complete training examples, as in the sketch below. NeMo supports pruning specific fields to meet the model token length limit (typically 2,048 tokens for public NeMo models using the HuggingFace GPT-2 tokenizer).
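
A single training record for this setup might look like the following; the field names mirror the template described above but are illustrative, so match them to your own task configuration.

```python
# One SQuAD-style prompt-learning record; NeMo maps these fields into the
# prompt template (virtual tokens + context + question + answer).
training_example = {
    "taskname": "squad",
    "context": "The Eiffel Tower was completed in 1889 for the World's Fair.",
    "question": "When was the Eiffel Tower completed?",
    "answer": "1889",
}
```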

LLMs leverage attention mechanisms for contextual understanding, enabling them to capture long-range dependencies in text. Additionally, large-scale computational resources, including powerful GPUs or TPUs, are essential for training these massive models efficiently. Regularization techniques and optimization strategies are also applied to manage the model’s complexity and improve training stability.

Fine-tuning LLMs is all about optimizing the model according to your needs. Conventional language models were evaluated using intrinsic metrics like bits per character, perplexity, and BLEU score. These metrics track performance on the language-modeling aspect, i.e., how good the model is at predicting the next word. Our fine-tuned model outperforms the pre-trained model by approximately 1%.

Now, let’s perform inference using the same input but with the PEFT model, as we did previously in step 7 with the original model. Note the rank (r) hyperparameter, which defines the rank/dimension of the adapter to be trained: r is the rank of the low-rank matrices used in the adapters and thus controls the number of parameters trained.

These records were generated by Databricks employees, who worked in various capability domains outlined in the InstructGPT paper. These domains include brainstorming, classification, closed QA, generation, information extraction, open QA and summarization. In addition, transfer learning can also help to improve the accuracy and robustness of the model.

Utilizing the existing knowledge embedded in the pre-trained model allows for achieving high performance on specific tasks with substantially reduced data and computational requirements. One of the important applications of embeddings is retrieval-augmented generation (RAG) with LLMs. In RAG, embeddings help find and retrieve documents that are relevant to a user’s prompt. The content of the retrieved documents is inserted into the prompt, and the LLM is instructed to generate its response based on those documents. RAG enables LLMs to avoid hallucinations and accomplish tasks involving information beyond their training data. In the context of LLM development, an example of a successful model is Databricks’ Dolly.
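
The retrieval step can be sketched in a few lines; this is a minimal illustration with made-up documents, assuming a sentence-transformers embedder, and the resulting prompt would then be sent to whatever LLM you use:

```python
from sentence_transformers import SentenceTransformer, util

embedder = SentenceTransformer("all-MiniLM-L6-v2")
docs = ["Our refund window is 30 days.", "Shipping takes 3-5 business days."]
doc_embs = embedder.encode(docs)

def build_rag_prompt(question: str) -> str:
    # Embed the question and pick the most similar document as context.
    scores = util.cos_sim(embedder.encode(question), doc_embs)[0]
    context = docs[int(scores.argmax())]
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

print(build_rag_prompt("How long do refunds take?"))
```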

These nodes contain metadata that captures the neighbouring sentences, with references to preceding and succeeding sentences. Admittedly, this phase can be tedious and time-consuming, and benchmarking a model on arbitrary data is poor practice, since it can lead to biased results. So in this section we will explore a different approach, based on synthetic data, to engineer data for fine-tuning an embedding model. Preparing the dataset is the first step in fine-tuning an embedding model; even if you download data from an external source, you must engineer it well enough that the model can process it and yield valuable outputs.
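
One common pattern, sketched below under the assumption that you have a callable LLM, is to ask the model to write a question that each chunk answers, yielding (query, positive passage) pairs for contrastive fine-tuning; llm here is a placeholder for any completion function.

```python
def make_training_pairs(chunks, llm):
    """Generate synthetic (query, positive) pairs for embedding fine-tuning.
    `llm` is a placeholder: any function that maps a prompt string to text."""
    pairs = []
    for chunk in chunks:
        query = llm(f"Write one question that the following passage answers:\n\n{chunk}")
        pairs.append({"query": query.strip(), "positive": chunk})
    return pairs
```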

These models are susceptible to biases in the training data, especially if it wasn’t adequately vetted. Specialized models can improve NLP tasks’ efficiency and accuracy, making interactions more intuitive and relevant. Custom LLMs have quickly become popular in a variety of sectors, including healthcare, law, finance, and more.

These models incorporate several techniques to minimize the exposure of user data during both the training and inference stages. Attention mechanisms in LLMs allow the model to focus selectively on specific parts of the input, depending on the context of the task at hand. This article delves deeper into large language models, exploring how they work, the different types of models available, and their applications in various fields. And by the end of this article, you will know how to build a private LLM. At this step, the dataset still contains raw data with code of arbitrary length. Let’s create an iterable dataset that returns constant-length chunks of tokens from a stream of text files.
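
A simplified version of such a dataset, assuming a Hugging Face tokenizer and an in-memory list of texts standing in for the file stream, could look like this:

```python
import torch
from torch.utils.data import IterableDataset


class ConstantLengthDataset(IterableDataset):
    """Buffer tokenized texts and yield fixed-length chunks of token ids."""

    def __init__(self, tokenizer, texts, seq_length=1024):
        self.tokenizer = tokenizer
        self.texts = texts
        self.seq_length = seq_length

    def __iter__(self):
        buffer = []
        for text in self.texts:
            buffer.extend(self.tokenizer(text)["input_ids"])
            # Emit complete chunks; leftover tokens wait for the next text.
            while len(buffer) >= self.seq_length:
                chunk, buffer = buffer[: self.seq_length], buffer[self.seq_length:]
                yield torch.tensor(chunk)
```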

Autoregressive models are generally used for generating long-form text, such as articles or stories, as they have a strong sense of coherence and can maintain a consistent writing style. However, they can sometimes generate text that is repetitive or lacks diversity. These are similar to any other kind of model training you may run, so we won’t go into detail here. To train a model using the LoRA technique, we need to wrap the base model as a PeftModel. This involves defining the LoRA configuration with LoraConfig and wrapping the original model with get_peft_model() using that LoraConfig. This will allow us to reduce memory usage, as quantization represents data with fewer bits.
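
Putting that together, the wrapping step generally looks like the sketch below; model is the base model loaded earlier, and the rank and target modules are illustrative values to tune for your own setup.

```python
from peft import LoraConfig, get_peft_model

lora_config = LoraConfig(
    r=16,                                  # rank of the low-rank adapter matrices
    lora_alpha=32,                         # scaling factor for adapter updates
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],   # attention projections; model-dependent
    task_type="CAUSAL_LM",
)

peft_model = get_peft_model(model, lora_config)  # `model` = base model loaded earlier
peft_model.print_trainable_parameters()          # shows how few parameters are trainable
```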

Language plays a fundamental role in human communication, and in today’s online era of ever-increasing data, it is inevitable to create tools to analyze, comprehend, and communicate coherently. It’s important to understand that all our publicly available models, like Mixtral 8x7B, are shared among many users, and this lets us offer very competitive pricing as a result. When you run your own model, you get full access to the GPUs and pay per GPU-hour your model is up. It is a fine-tuned version of Mistral-7B and, like Mistral-7B, contains 7 billion parameters.

Dolly is a large language model specifically designed to follow instructions and was trained on the Databricks machine-learning platform. The model is licensed for commercial use, making it an excellent choice for businesses looking to develop LLMs for their operations. Dolly is based on pythia-12b and was trained on approximately 15,000 instruction/response fine-tuning records, known as databricks-dolly-15k.
