Passionate about FOSS and donuts

  • 8 Posts
  • 37 Comments
Joined 4 years ago
Cake day: April 8th, 2021

  • I’m not familiar with GitHub Copilot’s internal workings, copyright law, or the author of the article. However, some ideas:

    GitHub Copilot’s underlying technology probably cannot be considered artificial intelligence. At best, it can only be considered a context-aware copy-paste program. However, it probably does what it does due to the programming habits of human developers, and how we structure our code. There are established design patterns - ways of doing things - that most developers follow: certain names we give to certain variables, certain patterns we use in specific scenarios. If you think of programming as a science, you could say that the optimum code for common scenarios in a given language has probably already been written.

    Human devs’ frequent use of 1) tutorial/example/sample code from frameworks, libraries, and whatnot, and 2) StackOverflow code strengthens this hypothesis. Copilot is so useful (allegedly) - and so blatantly copies, for example, GPL code (allegedly) - simply because a program trained on a dataset of crowdsourced, optimal solutions to the problems devs face will more often than not take that optimal solution and suggest it in its entirety. There’s no better solution, right? From what I’ve heard, GitHub Copilot is built on an “AI” specializing in languages and autocompletion. It may very well be that the “AI” simply goes: when the dev types this code, what usually comes up after? Oh, that? Let’s just suggest that then.
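
    To illustrate that last point with a toy sketch (nothing like Copilot’s actual model, and all the training data below is made up): even a dumb lookup table of “which line usually follows this one” in a training corpus ends up reproducing that corpus verbatim, which is exactly the alleged behavior:

        from collections import Counter, defaultdict

        # Toy "autocomplete": for every line seen in training code, count
        # which line follows it, then always suggest the most common one.
        # Real models work on tokens with learned weights, not whole lines.
        training_files = [
            ["for i in range(10):", "    print(i)"],
            ["for i in range(10):", "    print(i)"],
            ["for i in range(10):", "    total += i"],
        ]

        followers = defaultdict(Counter)
        for lines in training_files:
            for prev, nxt in zip(lines, lines[1:]):
                followers[prev][nxt] += 1

        def suggest(line):
            # Return the continuation seen most often after this exact line.
            counts = followers.get(line)
            return counts.most_common(1)[0][0] if counts else None

        print(suggest("for i in range(10):"))  # suggests "    print(i)" verbatim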

    There’s no real getting around this issue, as developers probably do this when they write their code too. Just use the best solution, right? However, for many algorithms, developers know how they work and implement them based on that knowledge; not because in most code the algorithm looks like the algorithm in FOSS project XYZ. They probably won’t use the same variable names either. Of course, it could be argued that the end product is the same, but the process isn’t. This is where the ethical dilemma comes up. How can we ensure that the original solvers of the problem, or task, are credited or gain some sort of material benefit? Copilot probably cannot just include the license and author of the code it has taken when suggesting snippets, because of how the dataset may be structured. How could it credit the code snippets it uses? Is what it does ethical?

    I do agree with the article that Copilot does not currently violate the copyright of code protected by the GPL or other licenses, simply due to exceptions in the application of copyright law, or the fine print. I don’t know what a possible solution could be.

  • Well, honestly, a lot of FOSS software has been lacking in usability in general, not just accessibility. It’s to be expected: lots of software was basically born from hobby projects, and there is no unifying entity creating everything or defining human interface guidelines, besides perhaps GNOME and KDE.

    The thing is that there is a big emphasis in FOSS on “implement it yourself” for the features you need, because most work is volunteer-driven. So unless someone or some organization funds a developer or two to implement accessibility features, they don’t magically come into being.

  • That’s very informative, thanks!

    For others curious:

    In other words, within your browser, it enables a new connection to a hosted virtual machine (VM) that emulates a physical computer’s processor. This process enables the virtual machine to run a variety of guest operating systems using your Web browser as the display monitor.

    The VM display is provided by a direct virtual network computing (VNC) connection. VNC is a graphical desktop-sharing system using the remote frame buffer protocol (RFB) to allow remote control of another computer. Multiple users may connect to the VNC server at the same time.

    A button sits in the center of the left window edge of the running distro. Click it to slide out a menu with several options for controlling the VNC display window.
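
    If you want to poke at a VNC endpoint like that programmatically, here’s a minimal sketch using the third-party vncdotool library - the host, port, and filenames are made up, and you’d point it at whatever server the service actually exposes:

        # pip install vncdotool
        from vncdotool import api

        # Connect to a VNC server; '::5900' means a raw port number
        # rather than a display number. Host here is hypothetical.
        client = api.connect('demo.example.org::5900', password=None)

        # Grab a screenshot of the remote framebuffer, then send a keypress
        # to the guest OS running in the VM.
        client.captureScreen('vm-screen.png')
        client.keyPress('enter')

        client.disconnect()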

  • “Apple has been opposing Right to Repair bills by claiming that their service network is the only safe repair option for consumers,” Kyle Wiens, CEO of iFixit, told Motherboard. “But the only person that is totally guaranteed to be trustworthy to fix your iPhone is you. Any time you hand your data to another entity, you risk something like this. By withholding access to service tools and forcing customers to use their third party contractor, Apple is willfully compromising the security of their customers.”

  • Great article. Quotes (emphasis added by me):

    This can be a tricky concept to understand. Class action lawsuits, where many individuals join together even though each might have suffered only a small harm, are a good conceptual analogy. Big tech companies understand the commercial benefits they can derive from analyzing the data of groups while superficially protecting the data of individuals through mathematical techniques like differential privacy. But regulators continue to focus on protecting individuals or, at best, protected classes like people of particular genders, ages, ethnicities, or sexual orientations.

    Individuals should not have to fight for their data privacy rights and be responsible for every consequence of their digital actions. Consider an analogy: people have a right to safe drinking water, but they aren’t urged to exercise that right by checking the quality of the water with a pipette every time they have a drink at the tap. Instead, regulatory agencies act on everyone’s behalf to ensure that all our water is safe. The same must be done for digital privacy: it isn’t something the average user is, or should be expected to be, personally competent to protect.

    Apple promises in one advertisement: “Right now, there is more private information on your phone than in your home. Your locations, your messages, your heart rate after a run. These are private things. And they should belong to you.” Apple is reinforcing this individualist’s fallacy: by failing to mention that your phone stores more than just your personal data, the company obfuscates the fact that the really valuable data comes from your interactions with your service providers and others. The notion that your phone is the digital equivalent of your filing cabinet is a convenient illusion. Companies actually care little about your personal data; that is why they can pretend to lock it in a box. The value lies in the inferences drawn from your interactions, which are also stored on your phone—but that data does not belong to you.
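
    Side note, since the first quote name-drops it: “differential privacy” is a concrete mathematical technique, not hand-waving. A toy sketch of its most basic form, the Laplace mechanism - the dataset, query, and epsilon below are all made up for illustration:

        import numpy as np

        rng = np.random.default_rng()

        ages = np.array([23, 35, 41, 29, 52])  # made-up individual data

        def dp_count_over_40(data, epsilon=0.5):
            # True answer: how many people are over 40 (here, 2).
            true_count = int((data > 40).sum())
            # A counting query changes by at most 1 when any one person is
            # added or removed (sensitivity = 1), so adding Laplace noise
            # with scale 1/epsilon makes the released number look roughly
            # the same with or without any single individual - protecting
            # the individual while preserving the group-level statistic.
            return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

        print(dp_count_over_40(ages))  # noisy, but close to the true value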